Test Report: QEMU_macOS 15452

3d1b44055adad7e03143a6f957c5b2d808d258a0:2023-07-01:29954

Failed tests (87/244)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 13.12
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 10.27
22 TestAddons/Setup 46.36
23 TestCertOptions 9.99
24 TestCertExpiration 195.98
25 TestDockerFlags 10.15
26 TestForceSystemdFlag 10.22
27 TestForceSystemdEnv 10.55
72 TestFunctional/parallel/ServiceCmdConnect 32.59
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
139 TestImageBuild/serial/BuildWithBuildArg 1.01
148 TestIngressAddonLegacy/serial/ValidateIngressAddons 55.43
183 TestMountStart/serial/StartWithMountFirst 9.94
186 TestMultiNode/serial/FreshStart2Nodes 9.9
187 TestMultiNode/serial/DeployApp2Nodes 116.71
188 TestMultiNode/serial/PingHostFrom2Pods 0.08
189 TestMultiNode/serial/AddNode 0.07
190 TestMultiNode/serial/ProfileList 0.16
191 TestMultiNode/serial/CopyFile 0.06
192 TestMultiNode/serial/StopNode 0.13
193 TestMultiNode/serial/StartAfterStop 0.1
194 TestMultiNode/serial/RestartKeepsNodes 5.37
195 TestMultiNode/serial/DeleteNode 0.09
196 TestMultiNode/serial/StopMultiNode 0.14
197 TestMultiNode/serial/RestartMultiNode 5.24
198 TestMultiNode/serial/ValidateNameConflict 20.15
202 TestPreload 10.01
204 TestScheduledStopUnix 9.98
205 TestSkaffold 12.14
208 TestRunningBinaryUpgrade 173.32
210 TestKubernetesUpgrade 15.48
223 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.74
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.38
225 TestStoppedBinaryUpgrade/Setup 168.12
227 TestPause/serial/Start 9.79
237 TestNoKubernetes/serial/StartWithK8s 9.77
238 TestNoKubernetes/serial/StartWithStopK8s 5.32
239 TestNoKubernetes/serial/Start 5.31
243 TestNoKubernetes/serial/StartNoArgs 5.31
245 TestNetworkPlugins/group/auto/Start 9.74
246 TestNetworkPlugins/group/kindnet/Start 9.87
247 TestNetworkPlugins/group/flannel/Start 9.91
248 TestNetworkPlugins/group/enable-default-cni/Start 9.82
249 TestNetworkPlugins/group/bridge/Start 9.67
250 TestNetworkPlugins/group/kubenet/Start 9.67
251 TestNetworkPlugins/group/custom-flannel/Start 9.84
252 TestNetworkPlugins/group/calico/Start 9.69
253 TestNetworkPlugins/group/false/Start 9.78
255 TestStartStop/group/old-k8s-version/serial/FirstStart 9.85
256 TestStartStop/group/old-k8s-version/serial/DeployApp 0.08
257 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
260 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
261 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
262 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.05
263 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
264 TestStartStop/group/old-k8s-version/serial/Pause 0.1
266 TestStartStop/group/no-preload/serial/FirstStart 9.76
267 TestStoppedBinaryUpgrade/Upgrade 2.56
268 TestStoppedBinaryUpgrade/MinikubeLogs 0.14
270 TestStartStop/group/embed-certs/serial/FirstStart 9.79
271 TestStartStop/group/no-preload/serial/DeployApp 0.08
272 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
275 TestStartStop/group/no-preload/serial/SecondStart 5.24
276 TestStartStop/group/embed-certs/serial/DeployApp 0.09
277 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
280 TestStartStop/group/embed-certs/serial/SecondStart 5.26
281 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
282 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.05
283 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
284 TestStartStop/group/no-preload/serial/Pause 0.09
286 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.85
287 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
289 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/embed-certs/serial/Pause 0.1
292 TestStartStop/group/newest-cni/serial/FirstStart 9.78
293 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
294 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
297 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.24
302 TestStartStop/group/newest-cni/serial/SecondStart 5.25
303 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
304 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.05
305 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
306 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.09
309 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (13.12s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-035000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-035000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.116632375s)

-- stdout --
	{"specversion":"1.0","id":"b0ab03bd-3486-4fbd-9be5-441c0f92501f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-035000] minikube v1.30.1 on Darwin 13.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"19783191-3c7a-4710-98f8-e4120710f5a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15452"}}
	{"specversion":"1.0","id":"a71d9120-3612-4b0c-99ae-2996634b238f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig"}}
	{"specversion":"1.0","id":"a6cc4cfd-95b7-49b1-b3f1-1e45c271a006","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c5f616ac-731c-468b-aecd-3ebd81b8cbee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"90e28bb5-8e43-44f5-abcf-63e5f4eb4435","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube"}}
	{"specversion":"1.0","id":"32e35724-72e5-4656-88c4-d4854212947c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"ffe8a784-1499-4740-9415-9928ef739a46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"94dc7249-3631-479e-b3be-18389b61d116","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"977d39df-1712-4515-a0d6-1e9ee9b0daa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"99289d61-b460-4a7b-9429-76db3cf20d9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-035000 in cluster download-only-035000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b6359dda-c2b2-4a73-b2d5-50863c6b0a6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"db363f37-74ef-45d1-9af3-dc75a93d6461","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105b18430 0x105b18430 0x105b18430 0x105b18430 0x105b18430 0x105b18430 0x105b18430] Decompressors:map[bz2:0x14000057df8 gz:0x14000057ee0 tar:0x14000057e00 tar.bz2:0x14000057e10 tar.gz:0x14000057e20 tar.xz:0x14000057ea0 tar.zst:0x14000057ed0 tbz2:0x14000057e10 tgz:0x14000057e20 txz:0x14000057ea0 tzst:0x14000057ed0 xz:0x14000057ee8 zip:0x14000057ef0 zst:0x14000057f00] Getters:map[file:0x14000ad8bf0 http:0x14000a94140 https:0x14000a941e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"de200fce-2d49-4c76-a40a-c1a5c590533b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0701 12:34:45.027873    1463 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:34:45.027998    1463 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:34:45.028002    1463 out.go:309] Setting ErrFile to fd 2...
	I0701 12:34:45.028004    1463 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:34:45.028067    1463 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	W0701 12:34:45.028128    1463 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15452-1041/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15452-1041/.minikube/config/config.json: no such file or directory
	I0701 12:34:45.029199    1463 out.go:303] Setting JSON to true
	I0701 12:34:45.046110    1463 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":255,"bootTime":1688239830,"procs":387,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:34:45.046181    1463 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:34:45.054197    1463 out.go:97] [download-only-035000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:34:45.058155    1463 out.go:169] MINIKUBE_LOCATION=15452
	I0701 12:34:45.054323    1463 notify.go:220] Checking for updates...
	W0701 12:34:45.054353    1463 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball: no such file or directory
	I0701 12:34:45.067108    1463 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:34:45.070192    1463 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:34:45.073147    1463 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:34:45.076150    1463 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	W0701 12:34:45.082211    1463 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0701 12:34:45.082483    1463 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:34:45.087124    1463 out.go:97] Using the qemu2 driver based on user configuration
	I0701 12:34:45.087132    1463 start.go:297] selected driver: qemu2
	I0701 12:34:45.087134    1463 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:34:45.087209    1463 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:34:45.090122    1463 out.go:169] Automatically selected the socket_vmnet network
	I0701 12:34:45.096604    1463 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0701 12:34:45.096702    1463 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 12:34:45.096751    1463 cni.go:84] Creating CNI manager for ""
	I0701 12:34:45.096771    1463 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0701 12:34:45.096779    1463 start_flags.go:319] config:
	{Name:download-only-035000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-035000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:34:45.102651    1463 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:34:45.107165    1463 out.go:97] Downloading VM boot image ...
	I0701 12:34:45.107195    1463 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso
	I0701 12:34:50.963219    1463 out.go:97] Starting control plane node download-only-035000 in cluster download-only-035000
	I0701 12:34:50.963239    1463 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0701 12:34:51.017855    1463 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0701 12:34:51.017905    1463 cache.go:57] Caching tarball of preloaded images
	I0701 12:34:51.018079    1463 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0701 12:34:51.022153    1463 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0701 12:34:51.022160    1463 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0701 12:34:51.094968    1463 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0701 12:34:57.097110    1463 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0701 12:34:57.097243    1463 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0701 12:34:57.738151    1463 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0701 12:34:57.738320    1463 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/download-only-035000/config.json ...
	I0701 12:34:57.738341    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/download-only-035000/config.json: {Name:mk9f4d28b217eec16b8700d8bd45a47dc566dc7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:34:57.738570    1463 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0701 12:34:57.738742    1463 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0701 12:34:58.074786    1463 out.go:169] 
	W0701 12:34:58.079787    1463 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105b18430 0x105b18430 0x105b18430 0x105b18430 0x105b18430 0x105b18430 0x105b18430] Decompressors:map[bz2:0x14000057df8 gz:0x14000057ee0 tar:0x14000057e00 tar.bz2:0x14000057e10 tar.gz:0x14000057e20 tar.xz:0x14000057ea0 tar.zst:0x14000057ed0 tbz2:0x14000057e10 tgz:0x14000057e20 txz:0x14000057ea0 tzst:0x14000057ed0 xz:0x14000057ee8 zip:0x14000057ef0 zst:0x14000057f00] Getters:map[file:0x14000ad8bf0 http:0x14000a94140 https:0x14000a941e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0701 12:34:58.079812    1463 out_reason.go:110] 
	W0701 12:34:58.085888    1463 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:34:58.089705    1463 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-035000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (13.12s)
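
The failure itself is in the io.k8s.sigs.minikube.error event above: dl.k8s.io has no darwin/arm64 kubectl binary (nor its .sha1 checksum) for v1.16.0, a release that predates Apple Silicon, so the checksum download returns 404 and minikube exits with status 40. A minimal way to confirm this outside the test harness, assuming curl is available on the agent (this command is not part of the suite):

	# Expect "404": v1.16.0 never shipped darwin/arm64 release artifacts.
	curl -sIL -o /dev/null -w '%{http_code}\n' \
	  https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1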

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestOffline (10.27s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-158000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-158000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.103439042s)

-- stdout --
	* [offline-docker-158000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-158000 in cluster offline-docker-158000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-158000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:49:15.890688    3189 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:49:15.890803    3189 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:49:15.890807    3189 out.go:309] Setting ErrFile to fd 2...
	I0701 12:49:15.890809    3189 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:49:15.890886    3189 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:49:15.892119    3189 out.go:303] Setting JSON to false
	I0701 12:49:15.908798    3189 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1125,"bootTime":1688239830,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:49:15.908868    3189 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:49:15.913288    3189 out.go:177] * [offline-docker-158000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:49:15.920315    3189 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:49:15.920359    3189 notify.go:220] Checking for updates...
	I0701 12:49:15.927218    3189 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:49:15.930252    3189 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:49:15.933167    3189 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:49:15.936164    3189 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:49:15.939212    3189 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:49:15.942448    3189 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:49:15.942492    3189 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:49:15.946211    3189 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:49:15.953190    3189 start.go:297] selected driver: qemu2
	I0701 12:49:15.953199    3189 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:49:15.953206    3189 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:49:15.955013    3189 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:49:15.958167    3189 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:49:15.961300    3189 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:49:15.961320    3189 cni.go:84] Creating CNI manager for ""
	I0701 12:49:15.961326    3189 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:49:15.961342    3189 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:49:15.961348    3189 start_flags.go:319] config:
	{Name:offline-docker-158000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-158000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:49:15.965453    3189 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:49:15.972222    3189 out.go:177] * Starting control plane node offline-docker-158000 in cluster offline-docker-158000
	I0701 12:49:15.976054    3189 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:49:15.976083    3189 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:49:15.976095    3189 cache.go:57] Caching tarball of preloaded images
	I0701 12:49:15.976155    3189 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:49:15.976160    3189 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:49:15.976219    3189 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/offline-docker-158000/config.json ...
	I0701 12:49:15.976230    3189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/offline-docker-158000/config.json: {Name:mk73891570781c8743085b7c61c48de1e554a4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:49:15.976460    3189 start.go:365] acquiring machines lock for offline-docker-158000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:49:15.976489    3189 start.go:369] acquired machines lock for "offline-docker-158000" in 21.416µs
	I0701 12:49:15.976500    3189 start.go:93] Provisioning new machine with config: &{Name:offline-docker-158000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-158000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:49:15.976528    3189 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:49:15.980201    3189 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 12:49:15.994093    3189 start.go:159] libmachine.API.Create for "offline-docker-158000" (driver="qemu2")
	I0701 12:49:15.994119    3189 client.go:168] LocalClient.Create starting
	I0701 12:49:15.994192    3189 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:49:15.994212    3189 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:15.994221    3189 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:15.994267    3189 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:49:15.994284    3189 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:15.994289    3189 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:15.994595    3189 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:49:16.162248    3189 main.go:141] libmachine: Creating SSH key...
	I0701 12:49:16.362788    3189 main.go:141] libmachine: Creating Disk image...
	I0701 12:49:16.362799    3189 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:49:16.363074    3189 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/disk.qcow2
	I0701 12:49:16.372023    3189 main.go:141] libmachine: STDOUT: 
	I0701 12:49:16.372040    3189 main.go:141] libmachine: STDERR: 
	I0701 12:49:16.372106    3189 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/disk.qcow2 +20000M
	I0701 12:49:16.380008    3189 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:49:16.380032    3189 main.go:141] libmachine: STDERR: 
	I0701 12:49:16.380055    3189 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/disk.qcow2
	I0701 12:49:16.380061    3189 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:49:16.380103    3189 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:1d:fd:bf:ec:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/disk.qcow2
	I0701 12:49:16.381612    3189 main.go:141] libmachine: STDOUT: 
	I0701 12:49:16.381626    3189 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:49:16.381645    3189 client.go:171] LocalClient.Create took 387.527875ms
	I0701 12:49:18.383701    3189 start.go:128] duration metric: createHost completed in 2.40720375s
	I0701 12:49:18.383764    3189 start.go:83] releasing machines lock for "offline-docker-158000", held for 2.407308125s
	W0701 12:49:18.383777    3189 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:49:18.407498    3189 out.go:177] * Deleting "offline-docker-158000" in qemu2 ...
	W0701 12:49:18.421121    3189 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:49:18.421128    3189 start.go:687] Will try again in 5 seconds ...
	I0701 12:49:23.423069    3189 start.go:365] acquiring machines lock for offline-docker-158000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:49:23.423142    3189 start.go:369] acquired machines lock for "offline-docker-158000" in 57.125µs
	I0701 12:49:23.423162    3189 start.go:93] Provisioning new machine with config: &{Name:offline-docker-158000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-158000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:49:23.423208    3189 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:49:23.432220    3189 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 12:49:23.445527    3189 start.go:159] libmachine.API.Create for "offline-docker-158000" (driver="qemu2")
	I0701 12:49:23.445557    3189 client.go:168] LocalClient.Create starting
	I0701 12:49:23.445611    3189 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:49:23.445631    3189 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:23.445639    3189 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:23.445678    3189 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:49:23.445692    3189 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:23.445699    3189 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:23.445950    3189 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:49:23.817125    3189 main.go:141] libmachine: Creating SSH key...
	I0701 12:49:23.909905    3189 main.go:141] libmachine: Creating Disk image...
	I0701 12:49:23.909914    3189 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:49:23.910087    3189 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/disk.qcow2
	I0701 12:49:23.918909    3189 main.go:141] libmachine: STDOUT: 
	I0701 12:49:23.918927    3189 main.go:141] libmachine: STDERR: 
	I0701 12:49:23.919002    3189 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/disk.qcow2 +20000M
	I0701 12:49:23.926396    3189 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:49:23.926409    3189 main.go:141] libmachine: STDERR: 
	I0701 12:49:23.926423    3189 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/disk.qcow2
	I0701 12:49:23.926428    3189 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:49:23.926466    3189 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:27:38:0e:cb:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/offline-docker-158000/disk.qcow2
	I0701 12:49:23.927958    3189 main.go:141] libmachine: STDOUT: 
	I0701 12:49:23.927974    3189 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:49:23.927985    3189 client.go:171] LocalClient.Create took 482.433875ms
	I0701 12:49:25.930125    3189 start.go:128] duration metric: createHost completed in 2.506911041s
	I0701 12:49:25.930183    3189 start.go:83] releasing machines lock for "offline-docker-158000", held for 2.507077583s
	W0701 12:49:25.930588    3189 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-158000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-158000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:49:25.938968    3189 out.go:177] 
	W0701 12:49:25.943062    3189 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:49:25.943090    3189 out.go:239] * 
	* 
	W0701 12:49:25.945724    3189 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:49:25.952980    3189 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-158000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-07-01 12:49:25.967051 -0700 PDT m=+881.038737793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-158000 -n offline-docker-158000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-158000 -n offline-docker-158000: exit status 7 (66.4195ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-158000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-158000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-158000
--- FAIL: TestOffline (10.27s)
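
This failure is environmental rather than something the test itself exposed: the qemu2 driver starts the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands QEMU the resulting file descriptor (fd=3 in the command line above), and "Connection refused" means no socket_vmnet daemon was listening on the agent. The same error underlies most of the other qemu2 start failures in this run. A hedged triage sketch, reusing the exact paths from the log; the final command assumes a Homebrew install of socket_vmnet:

	# Is anything listening on the socket? Try the same client the driver uses.
	ls -l /var/run/socket_vmnet
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# If the connect fails, (re)start the daemon (Homebrew example):
	sudo brew services start socket_vmnet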

TestAddons/Setup (46.36s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-889000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-889000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (46.352627042s)

-- stdout --
	* [addons-889000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-889000 in cluster addons-889000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	* Verifying ingress addon...
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.17.0
	* Verifying csi-hostpath-driver addon...
	* Verifying registry addon...
	
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
** stderr ** 
	I0701 12:35:05.486569    1543 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:35:05.486729    1543 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:35:05.486732    1543 out.go:309] Setting ErrFile to fd 2...
	I0701 12:35:05.486735    1543 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:35:05.486799    1543 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:35:05.487879    1543 out.go:303] Setting JSON to false
	I0701 12:35:05.502946    1543 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":275,"bootTime":1688239830,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:35:05.502996    1543 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:35:05.508232    1543 out.go:177] * [addons-889000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:35:05.514270    1543 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:35:05.514292    1543 notify.go:220] Checking for updates...
	I0701 12:35:05.519166    1543 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:35:05.522154    1543 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:35:05.525232    1543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:35:05.528228    1543 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:35:05.531201    1543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:35:05.534352    1543 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:35:05.538194    1543 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:35:05.545200    1543 start.go:297] selected driver: qemu2
	I0701 12:35:05.545208    1543 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:35:05.545214    1543 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:35:05.547125    1543 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:35:05.550175    1543 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:35:05.553197    1543 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:35:05.553211    1543 cni.go:84] Creating CNI manager for ""
	I0701 12:35:05.553216    1543 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:35:05.553220    1543 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:35:05.553226    1543 start_flags.go:319] config:
	{Name:addons-889000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-889000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:35:05.557540    1543 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:35:05.565147    1543 out.go:177] * Starting control plane node addons-889000 in cluster addons-889000
	I0701 12:35:05.569191    1543 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:35:05.569219    1543 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:35:05.569232    1543 cache.go:57] Caching tarball of preloaded images
	I0701 12:35:05.569292    1543 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:35:05.569297    1543 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:35:05.569499    1543 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/config.json ...
	I0701 12:35:05.569511    1543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/config.json: {Name:mk00b7b26d4e5e41938417c47655eec4a1adadd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:35:05.569714    1543 start.go:365] acquiring machines lock for addons-889000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:35:05.569822    1543 start.go:369] acquired machines lock for "addons-889000" in 102.417µs
	I0701 12:35:05.569845    1543 start.go:93] Provisioning new machine with config: &{Name:addons-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-889000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:35:05.569877    1543 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:35:05.578209    1543 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0701 12:35:05.952014    1543 start.go:159] libmachine.API.Create for "addons-889000" (driver="qemu2")
	I0701 12:35:05.952061    1543 client.go:168] LocalClient.Create starting
	I0701 12:35:05.952240    1543 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:35:06.078979    1543 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:35:06.138862    1543 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:35:06.318226    1543 main.go:141] libmachine: Creating SSH key...
	I0701 12:35:06.503880    1543 main.go:141] libmachine: Creating Disk image...
	I0701 12:35:06.503889    1543 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:35:06.504794    1543 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/disk.qcow2
	I0701 12:35:06.539161    1543 main.go:141] libmachine: STDOUT: 
	I0701 12:35:06.539182    1543 main.go:141] libmachine: STDERR: 
	I0701 12:35:06.539259    1543 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/disk.qcow2 +20000M
	I0701 12:35:06.546642    1543 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:35:06.546657    1543 main.go:141] libmachine: STDERR: 
	I0701 12:35:06.546673    1543 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/disk.qcow2
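The two qemu-img calls above are the standard recipe for the VM disk: convert a raw seed image into the sparse qcow2 format, then grow its virtual size. A minimal sketch with the paths shortened:

    # raw seed -> sparse qcow2, then grow the virtual size by ~20 GB
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M

The resize only updates the size recorded in the qcow2 header, so the host-side file stays small until the guest actually writes data.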
	I0701 12:35:06.546684    1543 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:35:06.546723    1543 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:9f:7c:22:df:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/disk.qcow2
	I0701 12:35:06.613138    1543 main.go:141] libmachine: STDOUT: 
	I0701 12:35:06.613187    1543 main.go:141] libmachine: STDERR: 
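The invocation above wraps qemu-system-aarch64 in socket_vmnet_client, which (as socket_vmnet works) connects to the socket_vmnet daemon and hands the resulting vmnet-backed socket to QEMU as file descriptor 3; the `-netdev socket,id=net0,fd=3` backend then drives guest networking over that descriptor. Reduced to the networking-relevant pieces (disk, cdrom, and firmware options elided; see the full command above):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt -cpu host -accel hvf -m 4000 -smp 2 \
      -device virtio-net-pci,netdev=net0,mac=a6:9f:7c:22:df:64 \
      -netdev socket,id=net0,fd=3 \
      ...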
	I0701 12:35:06.613193    1543 main.go:141] libmachine: Attempt 0
	I0701 12:35:06.613211    1543 main.go:141] libmachine: Searching for a6:9f:7c:22:df:64 in /var/db/dhcpd_leases ...
	I0701 12:35:08.615393    1543 main.go:141] libmachine: Attempt 1
	I0701 12:35:08.615470    1543 main.go:141] libmachine: Searching for a6:9f:7c:22:df:64 in /var/db/dhcpd_leases ...
	I0701 12:35:10.617710    1543 main.go:141] libmachine: Attempt 2
	I0701 12:35:10.617753    1543 main.go:141] libmachine: Searching for a6:9f:7c:22:df:64 in /var/db/dhcpd_leases ...
	I0701 12:35:12.619786    1543 main.go:141] libmachine: Attempt 3
	I0701 12:35:12.619812    1543 main.go:141] libmachine: Searching for a6:9f:7c:22:df:64 in /var/db/dhcpd_leases ...
	I0701 12:35:14.621834    1543 main.go:141] libmachine: Attempt 4
	I0701 12:35:14.621843    1543 main.go:141] libmachine: Searching for a6:9f:7c:22:df:64 in /var/db/dhcpd_leases ...
	I0701 12:35:16.623857    1543 main.go:141] libmachine: Attempt 5
	I0701 12:35:16.623885    1543 main.go:141] libmachine: Searching for a6:9f:7c:22:df:64 in /var/db/dhcpd_leases ...
	I0701 12:35:18.625947    1543 main.go:141] libmachine: Attempt 6
	I0701 12:35:18.625967    1543 main.go:141] libmachine: Searching for a6:9f:7c:22:df:64 in /var/db/dhcpd_leases ...
	I0701 12:35:18.626054    1543 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0701 12:35:18.626083    1543 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:9f:7c:22:df:64 ID:1,a6:9f:7c:22:df:64 Lease:0x64a1d175}
	I0701 12:35:18.626087    1543 main.go:141] libmachine: Found match: a6:9f:7c:22:df:64
	I0701 12:35:18.626096    1543 main.go:141] libmachine: IP: 192.168.105.2
	I0701 12:35:18.626102    1543 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
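There is no guest agent at this point, so the driver discovers the VM's address by polling the host DHCP daemon's lease file for the MAC it assigned to the NIC. A rough shell equivalent of the retry loop above, assuming the usual macOS bootpd lease format in /var/db/dhcpd_leases:

    # wait until the VM's MAC shows up in the host's DHCP leases
    until grep -q 'a6:9f:7c:22:df:64' /var/db/dhcpd_leases; do sleep 2; done
    # the surrounding lease block carries the assigned ip_address (192.168.105.2 here)
    grep -B3 -A3 'a6:9f:7c:22:df:64' /var/db/dhcpd_leases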
	I0701 12:35:19.636793    1543 machine.go:88] provisioning docker machine ...
	I0701 12:35:19.636827    1543 buildroot.go:166] provisioning hostname "addons-889000"
	I0701 12:35:19.637949    1543 main.go:141] libmachine: Using SSH client type: native
	I0701 12:35:19.638516    1543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104f61100] 0x104f63b60 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0701 12:35:19.638528    1543 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-889000 && echo "addons-889000" | sudo tee /etc/hostname
	I0701 12:35:19.668773    1543 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0701 12:35:22.786479    1543 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-889000
	
	I0701 12:35:22.786618    1543 main.go:141] libmachine: Using SSH client type: native
	I0701 12:35:22.787169    1543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104f61100] 0x104f63b60 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0701 12:35:22.787188    1543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-889000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-889000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-889000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:35:22.868064    1543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:35:22.868089    1543 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15452-1041/.minikube CaCertPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15452-1041/.minikube}
	I0701 12:35:22.868111    1543 buildroot.go:174] setting up certificates
	I0701 12:35:22.868124    1543 provision.go:83] configureAuth start
	I0701 12:35:22.868130    1543 provision.go:138] copyHostCerts
	I0701 12:35:22.868346    1543 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.pem (1078 bytes)
	I0701 12:35:22.868660    1543 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15452-1041/.minikube/cert.pem (1123 bytes)
	I0701 12:35:22.868828    1543 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15452-1041/.minikube/key.pem (1679 bytes)
	I0701 12:35:22.868963    1543 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca-key.pem org=jenkins.addons-889000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-889000]
	I0701 12:35:23.036931    1543 provision.go:172] copyRemoteCerts
	I0701 12:35:23.037030    1543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:35:23.037051    1543 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/id_rsa Username:docker}
	I0701 12:35:23.072619    1543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:35:23.080257    1543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0701 12:35:23.087852    1543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 12:35:23.094957    1543 provision.go:86] duration metric: configureAuth took 226.831917ms
	I0701 12:35:23.094965    1543 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:35:23.095070    1543 config.go:182] Loaded profile config "addons-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:35:23.095103    1543 main.go:141] libmachine: Using SSH client type: native
	I0701 12:35:23.095320    1543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104f61100] 0x104f63b60 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0701 12:35:23.095325    1543 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:35:23.162688    1543 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:35:23.162695    1543 buildroot.go:70] root file system type: tmpfs
	I0701 12:35:23.162755    1543 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:35:23.162806    1543 main.go:141] libmachine: Using SSH client type: native
	I0701 12:35:23.163070    1543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104f61100] 0x104f63b60 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0701 12:35:23.163112    1543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:35:23.235702    1543 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:35:23.235777    1543 main.go:141] libmachine: Using SSH client type: native
	I0701 12:35:23.236080    1543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104f61100] 0x104f63b60 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0701 12:35:23.236090    1543 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:35:23.580133    1543 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
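The `diff ... || { mv ...; systemctl ...; }` one-liner is an idempotent-install pattern: diff exits non-zero both when the installed unit differs from the freshly rendered one and, as on this first boot, when it does not exist at all, so the `||` branch swaps the new unit in and (re)starts docker; on later runs with no changes, diff exits zero and nothing is touched. Generically, with `$unit` standing in for /lib/systemd/system/docker.service:

    # only install the new unit (and bounce the service) when it differs or is missing
    sudo diff -u "$unit" "$unit.new" || {
      sudo mv "$unit.new" "$unit"
      sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
    }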
	
	I0701 12:35:23.580147    1543 machine.go:91] provisioned docker machine in 3.94341475s
	I0701 12:35:23.580152    1543 client.go:171] LocalClient.Create took 17.628418375s
	I0701 12:35:23.580163    1543 start.go:167] duration metric: libmachine.API.Create for "addons-889000" took 17.628497084s
	I0701 12:35:23.580167    1543 start.go:300] post-start starting for "addons-889000" (driver="qemu2")
	I0701 12:35:23.580171    1543 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:35:23.580231    1543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:35:23.580239    1543 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/id_rsa Username:docker}
	I0701 12:35:23.615716    1543 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:35:23.617269    1543 info.go:137] Remote host: Buildroot 2021.02.12
	I0701 12:35:23.617277    1543 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15452-1041/.minikube/addons for local assets ...
	I0701 12:35:23.617337    1543 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15452-1041/.minikube/files for local assets ...
	I0701 12:35:23.617364    1543 start.go:303] post-start completed in 37.195083ms
	I0701 12:35:23.617700    1543 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/config.json ...
	I0701 12:35:23.617845    1543 start.go:128] duration metric: createHost completed in 18.048306125s
	I0701 12:35:23.617883    1543 main.go:141] libmachine: Using SSH client type: native
	I0701 12:35:23.618095    1543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104f61100] 0x104f63b60 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0701 12:35:23.618099    1543 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 12:35:23.684145    1543 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688240123.716650795
	
	I0701 12:35:23.684155    1543 fix.go:206] guest clock: 1688240123.716650795
	I0701 12:35:23.684159    1543 fix.go:219] Guest: 2023-07-01 12:35:23.716650795 -0700 PDT Remote: 2023-07-01 12:35:23.61785 -0700 PDT m=+18.150864543 (delta=98.800795ms)
	I0701 12:35:23.684169    1543 fix.go:190] guest clock delta is within tolerance: 98.800795ms
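Worked out, the clock check compares the guest's `date +%s.%N` output against the host wall clock captured at (nearly) the same instant:

    guest: 1688240123.716650795 s
    host:  1688240123.617850    s
    delta: 0.098800795 s ≈ 98.8 ms

98.8 ms is inside minikube's tolerance, so no guest clock adjustment is needed.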
	I0701 12:35:23.684172    1543 start.go:83] releasing machines lock for "addons-889000", held for 18.11468775s
	I0701 12:35:23.684524    1543 ssh_runner.go:195] Run: cat /version.json
	I0701 12:35:23.684534    1543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:35:23.684534    1543 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/id_rsa Username:docker}
	I0701 12:35:23.684570    1543 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/id_rsa Username:docker}
	I0701 12:35:23.725897    1543 ssh_runner.go:195] Run: systemctl --version
	I0701 12:35:23.767863    1543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 12:35:23.769820    1543 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:35:23.769853    1543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:35:23.775356    1543 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:35:23.775365    1543 start.go:466] detecting cgroup driver to use...
	I0701 12:35:23.775458    1543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:35:23.781214    1543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:35:23.784296    1543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:35:23.787485    1543 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:35:23.787515    1543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:35:23.790343    1543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:35:23.793145    1543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:35:23.796406    1543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:35:23.799601    1543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:35:23.802528    1543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:35:23.805323    1543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:35:23.808375    1543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:35:23.811398    1543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:35:23.872079    1543 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:35:23.881606    1543 start.go:466] detecting cgroup driver to use...
	I0701 12:35:23.881674    1543 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:35:23.887457    1543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:35:23.892231    1543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:35:23.898195    1543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:35:23.902974    1543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:35:23.907070    1543 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:35:23.964719    1543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:35:23.970149    1543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:35:23.975880    1543 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:35:23.977301    1543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:35:23.980448    1543 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:35:23.985834    1543 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:35:24.051355    1543 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:35:24.126850    1543 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:35:24.126865    1543 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0701 12:35:24.132189    1543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:35:24.208504    1543 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:35:25.373498    1543 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.164996791s)
	I0701 12:35:25.373562    1543 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:35:25.436275    1543 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:35:25.499369    1543 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:35:25.564408    1543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:35:25.631238    1543 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:35:25.637933    1543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:35:25.703322    1543 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0701 12:35:25.727987    1543 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:35:25.728083    1543 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
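The 60-second wait is implemented by retrying `stat` on the socket path until it appears (here it already exists). A shell equivalent, assuming a coreutils-style `timeout` is available:

    # poll for the CRI socket with a 60 s deadline
    timeout 60 sh -c 'until stat /var/run/cri-dockerd.sock >/dev/null 2>&1; do sleep 1; done'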
	I0701 12:35:25.730210    1543 start.go:534] Will wait 60s for crictl version
	I0701 12:35:25.730250    1543 ssh_runner.go:195] Run: which crictl
	I0701 12:35:25.731654    1543 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:35:25.746171    1543 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0701 12:35:25.746249    1543 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:35:25.755903    1543 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:35:25.767669    1543 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0701 12:35:25.767844    1543 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0701 12:35:25.769107    1543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
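The /etc/hosts edit above is a small idempotent-update idiom: emit every line except any stale `host.minikube.internal` entry, append the fresh mapping, and `sudo cp` the temp file back so the target file's inode and permissions are preserved. Expanded for readability:

    # rewrite /etc/hosts with exactly one host.minikube.internal entry
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.105.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts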
	I0701 12:35:25.772641    1543 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:35:25.772680    1543 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:35:25.777787    1543 docker.go:636] Got preloaded images: 
	I0701 12:35:25.777794    1543 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0701 12:35:25.777831    1543 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0701 12:35:25.780843    1543 ssh_runner.go:195] Run: which lz4
	I0701 12:35:25.782154    1543 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0701 12:35:25.783384    1543 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0701 12:35:25.783396    1543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0701 12:35:27.060024    1543 docker.go:600] Took 1.277943 seconds to copy over tarball
	I0701 12:35:27.060080    1543 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0701 12:35:28.082406    1543 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.022327834s)
	I0701 12:35:28.082421    1543 ssh_runner.go:146] rm: /preloaded.tar.lz4
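The preload path avoids pulling each image from a registry: the ~344 MB tarball is scp'd into the guest and unpacked straight over /var, where /var/lib/docker lives, so the image store is already populated when docker restarts below. `-I lz4` tells GNU tar to filter the archive through the lz4 decompressor:

    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4   # reclaim the space once extracted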
	I0701 12:35:28.097524    1543 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0701 12:35:28.100680    1543 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0701 12:35:28.105731    1543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:35:28.176235    1543 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:35:29.765029    1543 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.588809083s)
	I0701 12:35:29.765117    1543 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:35:29.771039    1543 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0701 12:35:29.771048    1543 cache_images.go:84] Images are preloaded, skipping loading
	I0701 12:35:29.771119    1543 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 12:35:29.779098    1543 cni.go:84] Creating CNI manager for ""
	I0701 12:35:29.779107    1543 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:35:29.779112    1543 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 12:35:29.779120    1543 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-889000 NodeName:addons-889000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 12:35:29.779190    1543 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-889000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0701 12:35:29.779229    1543 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-889000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-889000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0701 12:35:29.779283    1543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0701 12:35:29.782307    1543 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:35:29.782340    1543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 12:35:29.784804    1543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0701 12:35:29.790149    1543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:35:29.794943    1543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0701 12:35:29.800167    1543 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0701 12:35:29.801618    1543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:35:29.805263    1543 certs.go:56] Setting up /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000 for IP: 192.168.105.2
	I0701 12:35:29.805272    1543 certs.go:190] acquiring lock for shared ca certs: {Name:mk0d2f6007eea276ce17a3a9c6aca904411113ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:35:29.805418    1543 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.key
	I0701 12:35:29.927221    1543 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt ...
	I0701 12:35:29.927226    1543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt: {Name:mkf7bbb9c9b79217035611f80aa051a91ab46279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:35:29.927463    1543 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.key ...
	I0701 12:35:29.927467    1543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.key: {Name:mkc3038ebf27a089ebd687c8241a7d500116c704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:35:29.927583    1543 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.key
	I0701 12:35:30.117914    1543 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.crt ...
	I0701 12:35:30.117926    1543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.crt: {Name:mkb136eb4017c40ae8d684afc692956b450a3c24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:35:30.118175    1543 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.key ...
	I0701 12:35:30.118179    1543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.key: {Name:mkfee1ab504f69cd6c5192b7e3cc299bb4d7c80a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:35:30.118323    1543 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/client.key
	I0701 12:35:30.118329    1543 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/client.crt with IP's: []
	I0701 12:35:30.239179    1543 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/client.crt ...
	I0701 12:35:30.239184    1543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/client.crt: {Name:mka28db3307e9b90d8f9f830cfe31e2298e452a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:35:30.239372    1543 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/client.key ...
	I0701 12:35:30.239379    1543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/client.key: {Name:mk717d10540f1a892afda1abb5d70027c0d31c21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:35:30.239497    1543 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/apiserver.key.96055969
	I0701 12:35:30.239511    1543 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0701 12:35:30.358446    1543 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/apiserver.crt.96055969 ...
	I0701 12:35:30.358453    1543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/apiserver.crt.96055969: {Name:mk2269943de122b284d1bf8fe6d0904852d6d63e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:35:30.358596    1543 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/apiserver.key.96055969 ...
	I0701 12:35:30.358599    1543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/apiserver.key.96055969: {Name:mk78c0e84a90d7a3790cd0fccd3a34d1f4d7d1f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:35:30.358726    1543 certs.go:337] copying /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/apiserver.crt
	I0701 12:35:30.358859    1543 certs.go:341] copying /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/apiserver.key
	I0701 12:35:30.358946    1543 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/proxy-client.key
	I0701 12:35:30.358961    1543 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/proxy-client.crt with IP's: []
	I0701 12:35:30.468275    1543 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/proxy-client.crt ...
	I0701 12:35:30.468278    1543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/proxy-client.crt: {Name:mk4975394141dee92cbe479a2587bbe0ca445063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:35:30.468423    1543 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/proxy-client.key ...
	I0701 12:35:30.468428    1543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/proxy-client.key: {Name:mk66f8c305923218899d062fda8a1bc102ff0b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
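minikube generates these CAs and leaf certificates in Go, but each step has the familiar shape: create a self-signed CA, then sign per-purpose certificates with it, adding SANs where needed (note the IP SANs on the apiserver certificate above). Purely as an illustrative openssl equivalent of the CA step, with hypothetical file names:

    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -key ca.key -subj "/CN=minikubeCA" -days 365 -out ca.crt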
	I0701 12:35:30.468711    1543 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca-key.pem (1679 bytes)
	I0701 12:35:30.468744    1543 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:35:30.468766    1543 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:35:30.468788    1543 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/key.pem (1679 bytes)
	I0701 12:35:30.469147    1543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 12:35:30.477315    1543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0701 12:35:30.484200    1543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:35:30.491438    1543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/addons-889000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:35:30.498723    1543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:35:30.505365    1543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:35:30.512524    1543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:35:30.519650    1543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:35:30.526482    1543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:35:30.533037    1543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 12:35:30.539361    1543 ssh_runner.go:195] Run: openssl version
	I0701 12:35:30.541269    1543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:35:30.544592    1543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:35:30.546386    1543 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  1 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:35:30.546410    1543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:35:30.548356    1543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
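The `b5213941.0` link name is not arbitrary: OpenSSL's CApath lookup finds certificates by their subject-hash plus a numeric suffix, which is exactly what the hash step above computes:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, hence the symlink /etc/ssl/certs/b5213941.0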
	I0701 12:35:30.551596    1543 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0701 12:35:30.552937    1543 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0701 12:35:30.552976    1543 kubeadm.go:404] StartCluster: {Name:addons-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-889000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:35:30.553037    1543 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 12:35:30.558794    1543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 12:35:30.561690    1543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 12:35:30.564530    1543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 12:35:30.567622    1543 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 12:35:30.567636    1543 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0701 12:35:30.588457    1543 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0701 12:35:30.588535    1543 kubeadm.go:322] [preflight] Running pre-flight checks
	I0701 12:35:30.642633    1543 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0701 12:35:30.642690    1543 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0701 12:35:30.642738    1543 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0701 12:35:30.707199    1543 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0701 12:35:30.713868    1543 out.go:204]   - Generating certificates and keys ...
	I0701 12:35:30.713898    1543 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0701 12:35:30.713923    1543 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0701 12:35:30.759978    1543 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0701 12:35:31.045860    1543 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0701 12:35:31.328174    1543 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0701 12:35:31.561720    1543 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0701 12:35:31.739511    1543 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0701 12:35:31.739568    1543 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-889000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0701 12:35:31.878526    1543 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0701 12:35:31.878581    1543 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-889000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0701 12:35:31.944707    1543 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0701 12:35:32.042780    1543 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0701 12:35:32.135388    1543 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0701 12:35:32.135420    1543 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0701 12:35:32.228762    1543 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0701 12:35:32.317514    1543 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0701 12:35:32.421389    1543 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0701 12:35:32.555575    1543 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0701 12:35:32.562422    1543 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0701 12:35:32.562479    1543 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0701 12:35:32.562507    1543 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0701 12:35:32.623072    1543 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0701 12:35:32.627288    1543 out.go:204]   - Booting up control plane ...
	I0701 12:35:32.627348    1543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0701 12:35:32.627394    1543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0701 12:35:32.627872    1543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0701 12:35:32.627934    1543 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0701 12:35:32.628006    1543 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0701 12:35:37.132280    1543 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.503756 seconds
	I0701 12:35:37.132486    1543 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0701 12:35:37.149774    1543 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0701 12:35:37.663819    1543 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0701 12:35:37.664054    1543 kubeadm.go:322] [mark-control-plane] Marking the node addons-889000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0701 12:35:38.170800    1543 kubeadm.go:322] [bootstrap-token] Using token: ygiixv.kb5lrs7axqsj44ub
	I0701 12:35:38.177199    1543 out.go:204]   - Configuring RBAC rules ...
	I0701 12:35:38.177280    1543 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0701 12:35:38.177979    1543 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0701 12:35:38.181238    1543 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0701 12:35:38.182548    1543 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0701 12:35:38.184526    1543 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0701 12:35:38.186357    1543 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0701 12:35:38.191118    1543 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0701 12:35:38.330227    1543 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0701 12:35:38.580348    1543 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0701 12:35:38.580909    1543 kubeadm.go:322] 
	I0701 12:35:38.580939    1543 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0701 12:35:38.580943    1543 kubeadm.go:322] 
	I0701 12:35:38.580974    1543 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0701 12:35:38.580976    1543 kubeadm.go:322] 
	I0701 12:35:38.580988    1543 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0701 12:35:38.581014    1543 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0701 12:35:38.581038    1543 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0701 12:35:38.581040    1543 kubeadm.go:322] 
	I0701 12:35:38.581062    1543 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0701 12:35:38.581086    1543 kubeadm.go:322] 
	I0701 12:35:38.581107    1543 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0701 12:35:38.581111    1543 kubeadm.go:322] 
	I0701 12:35:38.581132    1543 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0701 12:35:38.581187    1543 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0701 12:35:38.581225    1543 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0701 12:35:38.581229    1543 kubeadm.go:322] 
	I0701 12:35:38.581273    1543 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0701 12:35:38.581309    1543 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0701 12:35:38.581314    1543 kubeadm.go:322] 
	I0701 12:35:38.581348    1543 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ygiixv.kb5lrs7axqsj44ub \
	I0701 12:35:38.581397    1543 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:46e6b689074307837292321246b5000df1ddfdde72c2b1da038f680c54d9d678 \
	I0701 12:35:38.581411    1543 kubeadm.go:322] 	--control-plane 
	I0701 12:35:38.581413    1543 kubeadm.go:322] 
	I0701 12:35:38.581453    1543 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0701 12:35:38.581456    1543 kubeadm.go:322] 
	I0701 12:35:38.581490    1543 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ygiixv.kb5lrs7axqsj44ub \
	I0701 12:35:38.581555    1543 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:46e6b689074307837292321246b5000df1ddfdde72c2b1da038f680c54d9d678 
	I0701 12:35:38.581641    1543 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0701 12:35:38.581654    1543 cni.go:84] Creating CNI manager for ""
	I0701 12:35:38.581665    1543 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:35:38.587256    1543 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0701 12:35:38.591353    1543 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0701 12:35:38.594757    1543 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0701 12:35:38.600245    1543 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 12:35:38.600306    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:38.600338    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=2455319192314a5b3ac0f7b56253e90d3c5c74c2 minikube.k8s.io/name=addons-889000 minikube.k8s.io/updated_at=2023_07_01T12_35_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:38.661154    1543 ops.go:34] apiserver oom_adj: -16
	I0701 12:35:38.661186    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:39.195675    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:39.695094    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:40.195704    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:40.695612    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:41.195815    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:41.695696    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:42.195800    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:42.695790    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:43.195709    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:43.695772    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:44.195703    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:44.695746    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:45.195695    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:45.695473    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:46.195486    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:46.695508    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:47.195727    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:47.695506    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:48.195520    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:48.695408    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:49.195438    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:49.695441    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:50.194788    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:50.695402    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:51.195415    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:51.695375    1543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:35:51.747137    1543 kubeadm.go:1081] duration metric: took 13.147134542s to wait for elevateKubeSystemPrivileges.
	I0701 12:35:51.747151    1543 kubeadm.go:406] StartCluster complete in 21.194577084s
	I0701 12:35:51.747161    1543 settings.go:142] acquiring lock: {Name:mk1853b69cc489034eba1c68e94bf3f8bc0ceb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:35:51.747325    1543 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:35:51.747501    1543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/kubeconfig: {Name:mk6d6ec6f258eefdfd78eed77d0a2eac619f380e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:35:51.747694    1543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 12:35:51.747740    1543 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0701 12:35:51.747799    1543 addons.go:66] Setting volumesnapshots=true in profile "addons-889000"
	I0701 12:35:51.747801    1543 addons.go:66] Setting ingress=true in profile "addons-889000"
	I0701 12:35:51.747806    1543 addons.go:228] Setting addon volumesnapshots=true in "addons-889000"
	I0701 12:35:51.747807    1543 addons.go:228] Setting addon ingress=true in "addons-889000"
	I0701 12:35:51.747834    1543 config.go:182] Loaded profile config "addons-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:35:51.747837    1543 host.go:66] Checking if "addons-889000" exists ...
	I0701 12:35:51.747848    1543 addons.go:66] Setting metrics-server=true in profile "addons-889000"
	I0701 12:35:51.747857    1543 addons.go:228] Setting addon metrics-server=true in "addons-889000"
	I0701 12:35:51.747860    1543 addons.go:66] Setting ingress-dns=true in profile "addons-889000"
	I0701 12:35:51.747865    1543 addons.go:228] Setting addon ingress-dns=true in "addons-889000"
	I0701 12:35:51.747878    1543 host.go:66] Checking if "addons-889000" exists ...
	I0701 12:35:51.747887    1543 host.go:66] Checking if "addons-889000" exists ...
	I0701 12:35:51.747910    1543 addons.go:66] Setting registry=true in profile "addons-889000"
	I0701 12:35:51.747922    1543 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-889000"
	I0701 12:35:51.747940    1543 addons.go:228] Setting addon registry=true in "addons-889000"
	I0701 12:35:51.747954    1543 host.go:66] Checking if "addons-889000" exists ...
	I0701 12:35:51.747979    1543 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-889000"
	I0701 12:35:51.747988    1543 host.go:66] Checking if "addons-889000" exists ...
	I0701 12:35:51.748018    1543 host.go:66] Checking if "addons-889000" exists ...
	I0701 12:35:51.748146    1543 addons.go:66] Setting default-storageclass=true in profile "addons-889000"
	I0701 12:35:51.748148    1543 addons.go:66] Setting inspektor-gadget=true in profile "addons-889000"
	W0701 12:35:51.748147    1543 host.go:54] host status for "addons-889000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/monitor: connect: connection refused
	I0701 12:35:51.748152    1543 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-889000"
	I0701 12:35:51.748154    1543 addons.go:66] Setting gcp-auth=true in profile "addons-889000"
	W0701 12:35:51.748155    1543 addons.go:274] "addons-889000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0701 12:35:51.748159    1543 addons.go:464] Verifying addon metrics-server=true in "addons-889000"
	I0701 12:35:51.748160    1543 mustload.go:65] Loading cluster: addons-889000
	I0701 12:35:51.748162    1543 addons.go:66] Setting cloud-spanner=true in profile "addons-889000"
	I0701 12:35:51.748166    1543 addons.go:228] Setting addon cloud-spanner=true in "addons-889000"
	I0701 12:35:51.748179    1543 host.go:66] Checking if "addons-889000" exists ...
	I0701 12:35:51.748227    1543 config.go:182] Loaded profile config "addons-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	W0701 12:35:51.748282    1543 host.go:54] host status for "addons-889000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/monitor: connect: connection refused
	W0701 12:35:51.748287    1543 addons_storage_classes.go:55] "addons-889000" is not running, writing default-storageclass=true to disk and skipping enablement
	I0701 12:35:51.748289    1543 addons.go:228] Setting addon default-storageclass=true in "addons-889000"
	I0701 12:35:51.748294    1543 host.go:66] Checking if "addons-889000" exists ...
	W0701 12:35:51.748389    1543 host.go:54] host status for "addons-889000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/monitor: connect: connection refused
	W0701 12:35:51.748393    1543 addons.go:274] "addons-889000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0701 12:35:51.751625    1543 out.go:177] 
	I0701 12:35:51.748146    1543 addons.go:66] Setting storage-provisioner=true in profile "addons-889000"
	I0701 12:35:51.748152    1543 addons.go:228] Setting addon inspektor-gadget=true in "addons-889000"
	W0701 12:35:51.748534    1543 host.go:54] host status for "addons-889000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/monitor: connect: connection refused
	W0701 12:35:51.748560    1543 host.go:54] host status for "addons-889000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/monitor: connect: connection refused
	W0701 12:35:51.748560    1543 host.go:54] host status for "addons-889000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/monitor: connect: connection refused
	W0701 12:35:51.748629    1543 host.go:54] host status for "addons-889000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/monitor: connect: connection refused
	W0701 12:35:51.754640    1543 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/monitor: connect: connection refused
	W0701 12:35:51.754713    1543 addons.go:274] "addons-889000" is not running, setting default-storageclass=true and skipping enablement (err=<nil>)
	I0701 12:35:51.754720    1543 addons.go:228] Setting addon storage-provisioner=true in "addons-889000"
	I0701 12:35:51.754740    1543 host.go:66] Checking if "addons-889000" exists ...
	W0701 12:35:51.754740    1543 addons.go:274] "addons-889000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0701 12:35:51.754743    1543 addons.go:274] "addons-889000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0701 12:35:51.754744    1543 addons.go:274] "addons-889000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0701 12:35:51.758501    1543 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/monitor: connect: connection refused
	I0701 12:35:51.761623    1543 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W0701 12:35:51.761632    1543 out.go:239] * 
	I0701 12:35:51.761675    1543 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-889000"
	I0701 12:35:51.761673    1543 addons.go:464] Verifying addon ingress=true in "addons-889000"
	I0701 12:35:51.761679    1543 addons.go:464] Verifying addon registry=true in "addons-889000"
	I0701 12:35:51.761738    1543 host.go:66] Checking if "addons-889000" exists ...
	* 
	I0701 12:35:51.765573    1543 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0701 12:35:51.771681    1543 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	W0701 12:35:51.772197    1543 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:35:51.775625    1543 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.17.0
	I0701 12:35:51.775635    1543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0701 12:35:51.782483    1543 out.go:177] * Verifying ingress addon...
	I0701 12:35:51.782491    1543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0701 12:35:51.791560    1543 out.go:177] * Verifying registry addon...
	I0701 12:35:51.795549    1543 out.go:177] * Verifying csi-hostpath-driver addon...
	I0701 12:35:51.804497    1543 out.go:177] 
	I0701 12:35:51.801601    1543 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5

** /stderr **
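
The twenty-seven identical "kubectl get sa default" invocations in the log above are minikube's elevateKubeSystemPrivileges wait loop (kubeadm.go:1081): it polls for the default service account roughly every 500ms (the interval is inferred from the timestamps) until the account exists. A shell equivalent of that loop, as a sketch rather than the actual Go implementation:

	until sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # poll interval observed in the timestamps above
	done
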
addons_test.go:90: out/minikube-darwin-arm64 start -p addons-889000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (46.36s)
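
Every addon in this run was only recorded on disk: each "Setting addon" step hit "connect: connection refused" on the machine's QMP monitor socket, so enablement was skipped with the machine treated as not running. A minimal manual probe of that socket, using the path from the log (the nc/pgrep commands are a debugging sketch, not part of the test suite):

	MON=/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/addons-889000/monitor
	ls -l "$MON"                    # is the QMP monitor socket present on disk?
	nc -U "$MON" </dev/null         # "Connection refused" here reproduces the host.go error
	pgrep -fl qemu-system-aarch64   # refused plus no QEMU process means the VM never started
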

TestCertOptions (9.99s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-297000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-297000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.714966708s)

-- stdout --
	* [cert-options-297000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-297000 in cluster cert-options-297000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-297000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-297000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-297000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-297000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-297000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (80.831ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-297000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-297000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-297000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-297000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-297000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (39.217333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-297000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-297000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-297000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-07-01 12:49:56.70385 -0700 PDT m=+911.776117710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-297000 -n cert-options-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-297000 -n cert-options-297000: exit status 7 (28.053333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-297000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-297000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-297000
--- FAIL: TestCertOptions (9.99s)
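
The SAN assertions at cert_options_test.go:69 never had a certificate to inspect because the VM never booted. Against a healthy node, the check the test performs reduces to roughly the following (a sketch; the grep pattern is illustrative):

	out/minikube-darwin-arm64 -p cert-options-297000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"
	# should list DNS:localhost, DNS:www.google.com, IP Address:127.0.0.1, IP Address:192.168.15.15
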

TestCertExpiration (195.98s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-117000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-117000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.957189s)

-- stdout --
	* [cert-expiration-117000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-117000 in cluster cert-expiration-117000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-117000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-117000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-117000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-117000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-117000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.858348958s)

-- stdout --
	* [cert-expiration-117000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-117000 in cluster cert-expiration-117000
	* Restarting existing qemu2 VM for "cert-expiration-117000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-117000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-117000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-117000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-117000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-117000 in cluster cert-expiration-117000
	* Restarting existing qemu2 VM for "cert-expiration-117000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-117000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-117000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-07-01 12:52:57.612578 -0700 PDT m=+1092.688263335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-117000 -n cert-expiration-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-117000 -n cert-expiration-117000: exit status 7 (50.218583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-117000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-117000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-117000
--- FAIL: TestCertExpiration (195.98s)
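
Both starts here, like nearly every other qemu2 failure in this report, bottom out in the same condition: nothing is listening on /var/run/socket_vmnet. A rough manual check on the build host (the daemon binary path is inferred from the SocketVMnetClientPath seen in the logs, and the gateway address from the 192.168.105.x guest IPs; both are assumptions):

	ls -l /var/run/socket_vmnet                 # control socket present at all?
	sudo launchctl list | grep -i socket_vmnet  # registered as a launchd service?
	# if it is not running, start it in the foreground to watch for errors:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
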

TestDockerFlags (10.15s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-424000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-424000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.900559584s)

-- stdout --
	* [docker-flags-424000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-424000 in cluster docker-flags-424000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-424000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:49:36.713742    3391 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:49:36.713870    3391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:49:36.713874    3391 out.go:309] Setting ErrFile to fd 2...
	I0701 12:49:36.713877    3391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:49:36.713940    3391 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:49:36.714941    3391 out.go:303] Setting JSON to false
	I0701 12:49:36.730001    3391 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1146,"bootTime":1688239830,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:49:36.730072    3391 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:49:36.734752    3391 out.go:177] * [docker-flags-424000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:49:36.741697    3391 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:49:36.746223    3391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:49:36.741755    3391 notify.go:220] Checking for updates...
	I0701 12:49:36.750661    3391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:49:36.753727    3391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:49:36.756701    3391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:49:36.759626    3391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:49:36.763032    3391 config.go:182] Loaded profile config "force-systemd-flag-243000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:49:36.763098    3391 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:49:36.763170    3391 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:49:36.767700    3391 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:49:36.774667    3391 start.go:297] selected driver: qemu2
	I0701 12:49:36.774672    3391 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:49:36.774677    3391 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:49:36.776549    3391 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:49:36.779739    3391 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:49:36.782694    3391 start_flags.go:914] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0701 12:49:36.782717    3391 cni.go:84] Creating CNI manager for ""
	I0701 12:49:36.782723    3391 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:49:36.782733    3391 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:49:36.782740    3391 start_flags.go:319] config:
	{Name:docker-flags-424000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-424000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:49:36.786887    3391 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:49:36.793557    3391 out.go:177] * Starting control plane node docker-flags-424000 in cluster docker-flags-424000
	I0701 12:49:36.797662    3391 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:49:36.797687    3391 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:49:36.797697    3391 cache.go:57] Caching tarball of preloaded images
	I0701 12:49:36.797778    3391 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:49:36.797784    3391 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:49:36.797847    3391 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/docker-flags-424000/config.json ...
	I0701 12:49:36.797859    3391 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/docker-flags-424000/config.json: {Name:mk6031d0d4a2ed22119718cc3c3dd66c67127d85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:49:36.798052    3391 start.go:365] acquiring machines lock for docker-flags-424000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:49:36.798082    3391 start.go:369] acquired machines lock for "docker-flags-424000" in 23.084µs
	I0701 12:49:36.798095    3391 start.go:93] Provisioning new machine with config: &{Name:docker-flags-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-424000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:49:36.798125    3391 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:49:36.806710    3391 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 12:49:36.822756    3391 start.go:159] libmachine.API.Create for "docker-flags-424000" (driver="qemu2")
	I0701 12:49:36.822778    3391 client.go:168] LocalClient.Create starting
	I0701 12:49:36.822840    3391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:49:36.822862    3391 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:36.822872    3391 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:36.822900    3391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:49:36.822915    3391 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:36.822923    3391 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:36.823200    3391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:49:37.037588    3391 main.go:141] libmachine: Creating SSH key...
	I0701 12:49:37.131973    3391 main.go:141] libmachine: Creating Disk image...
	I0701 12:49:37.131981    3391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:49:37.132148    3391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/disk.qcow2
	I0701 12:49:37.140559    3391 main.go:141] libmachine: STDOUT: 
	I0701 12:49:37.140584    3391 main.go:141] libmachine: STDERR: 
	I0701 12:49:37.140643    3391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/disk.qcow2 +20000M
	I0701 12:49:37.147719    3391 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:49:37.147732    3391 main.go:141] libmachine: STDERR: 
	I0701 12:49:37.147751    3391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/disk.qcow2
	I0701 12:49:37.147758    3391 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:49:37.147812    3391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:51:8c:62:7c:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/disk.qcow2
	I0701 12:49:37.149293    3391 main.go:141] libmachine: STDOUT: 
	I0701 12:49:37.149311    3391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:49:37.149331    3391 client.go:171] LocalClient.Create took 326.552375ms
	I0701 12:49:39.151673    3391 start.go:128] duration metric: createHost completed in 2.353510334s
	I0701 12:49:39.151755    3391 start.go:83] releasing machines lock for "docker-flags-424000", held for 2.353707791s
	W0701 12:49:39.151837    3391 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:49:39.168887    3391 out.go:177] * Deleting "docker-flags-424000" in qemu2 ...
	W0701 12:49:39.183402    3391 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:49:39.183430    3391 start.go:687] Will try again in 5 seconds ...
	I0701 12:49:44.185600    3391 start.go:365] acquiring machines lock for docker-flags-424000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:49:44.240849    3391 start.go:369] acquired machines lock for "docker-flags-424000" in 55.14325ms
	I0701 12:49:44.241021    3391 start.go:93] Provisioning new machine with config: &{Name:docker-flags-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-424000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:49:44.241323    3391 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:49:44.249012    3391 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 12:49:44.297381    3391 start.go:159] libmachine.API.Create for "docker-flags-424000" (driver="qemu2")
	I0701 12:49:44.297416    3391 client.go:168] LocalClient.Create starting
	I0701 12:49:44.297583    3391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:49:44.297631    3391 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:44.297656    3391 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:44.297762    3391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:49:44.297794    3391 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:44.297809    3391 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:44.298438    3391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:49:44.436960    3391 main.go:141] libmachine: Creating SSH key...
	I0701 12:49:44.525472    3391 main.go:141] libmachine: Creating Disk image...
	I0701 12:49:44.525478    3391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:49:44.525623    3391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/disk.qcow2
	I0701 12:49:44.534229    3391 main.go:141] libmachine: STDOUT: 
	I0701 12:49:44.534247    3391 main.go:141] libmachine: STDERR: 
	I0701 12:49:44.534303    3391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/disk.qcow2 +20000M
	I0701 12:49:44.541458    3391 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:49:44.541470    3391 main.go:141] libmachine: STDERR: 
	I0701 12:49:44.541492    3391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/disk.qcow2
	I0701 12:49:44.541499    3391 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:49:44.541545    3391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:90:42:b4:d5:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/docker-flags-424000/disk.qcow2
	I0701 12:49:44.542999    3391 main.go:141] libmachine: STDOUT: 
	I0701 12:49:44.543013    3391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:49:44.543023    3391 client.go:171] LocalClient.Create took 245.606792ms
	I0701 12:49:46.545153    3391 start.go:128] duration metric: createHost completed in 2.303844875s
	I0701 12:49:46.545256    3391 start.go:83] releasing machines lock for "docker-flags-424000", held for 2.304391416s
	W0701 12:49:46.545655    3391 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-424000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-424000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:49:46.556287    3391 out.go:177] 
	W0701 12:49:46.562424    3391 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:49:46.562450    3391 out.go:239] * 
	* 
	W0701 12:49:46.565048    3391 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:49:46.574225    3391 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-424000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-424000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-424000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (80.046542ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-424000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-424000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-424000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-424000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-424000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-424000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (42.558625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-424000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-424000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-424000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-424000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-07-01 12:49:46.713165 -0700 PDT m=+901.785244126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-424000 -n docker-flags-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-424000 -n docker-flags-424000: exit status 7 (27.858167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-424000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-424000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-424000
--- FAIL: TestDockerFlags (10.15s)
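
Note: every start attempt in TestDockerFlags dies at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the QEMU VM never boots and the later ssh/systemctl assertions run against a stopped host. A minimal Go probe (a diagnostic sketch, not part of the minikube test suite; the socket path is the SocketVMnetPath from the machine config logged above) reproduces the failure mode directly on the agent:

	// probe_socket_vmnet.go - hypothetical diagnostic sketch.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from SocketVMnetPath in the machine config logged above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused": the socket file exists but no daemon is accepting.
			// "no such file or directory": socket_vmnet never created the socket.
			fmt.Fprintf(os.Stderr, "probe failed: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections on", sock)
	}

A "connection refused" from this probe, matching the log above, points at the socket_vmnet daemon not running (or not listening) on this agent rather than at anything the test itself does.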

TestForceSystemdFlag (10.22s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag


=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-243000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-243000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.008957291s)

-- stdout --
	* [force-systemd-flag-243000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-243000 in cluster force-systemd-flag-243000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-243000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:49:31.604775    3367 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:49:31.604924    3367 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:49:31.604927    3367 out.go:309] Setting ErrFile to fd 2...
	I0701 12:49:31.604929    3367 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:49:31.604992    3367 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:49:31.606000    3367 out.go:303] Setting JSON to false
	I0701 12:49:31.620965    3367 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1141,"bootTime":1688239830,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:49:31.621026    3367 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:49:31.625996    3367 out.go:177] * [force-systemd-flag-243000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:49:31.633913    3367 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:49:31.636883    3367 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:49:31.633967    3367 notify.go:220] Checking for updates...
	I0701 12:49:31.645911    3367 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:49:31.648897    3367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:49:31.651891    3367 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:49:31.654931    3367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:49:31.656536    3367 config.go:182] Loaded profile config "force-systemd-env-232000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:49:31.656609    3367 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:49:31.656647    3367 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:49:31.660918    3367 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:49:31.667762    3367 start.go:297] selected driver: qemu2
	I0701 12:49:31.667769    3367 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:49:31.667774    3367 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:49:31.669740    3367 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:49:31.672918    3367 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:49:31.675981    3367 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 12:49:31.675997    3367 cni.go:84] Creating CNI manager for ""
	I0701 12:49:31.676003    3367 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:49:31.676007    3367 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:49:31.676011    3367 start_flags.go:319] config:
	{Name:force-systemd-flag-243000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-243000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:49:31.680501    3367 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:49:31.687913    3367 out.go:177] * Starting control plane node force-systemd-flag-243000 in cluster force-systemd-flag-243000
	I0701 12:49:31.691937    3367 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:49:31.691959    3367 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:49:31.691970    3367 cache.go:57] Caching tarball of preloaded images
	I0701 12:49:31.692023    3367 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:49:31.692028    3367 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:49:31.692084    3367 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/force-systemd-flag-243000/config.json ...
	I0701 12:49:31.692097    3367 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/force-systemd-flag-243000/config.json: {Name:mk9df1e2d747e6e2786feee6278f6985c830a4a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:49:31.692292    3367 start.go:365] acquiring machines lock for force-systemd-flag-243000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:49:31.692324    3367 start.go:369] acquired machines lock for "force-systemd-flag-243000" in 23.959µs
	I0701 12:49:31.692336    3367 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-243000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-243000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:49:31.692371    3367 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:49:31.696906    3367 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 12:49:31.712000    3367 start.go:159] libmachine.API.Create for "force-systemd-flag-243000" (driver="qemu2")
	I0701 12:49:31.712023    3367 client.go:168] LocalClient.Create starting
	I0701 12:49:31.712079    3367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:49:31.712100    3367 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:31.712113    3367 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:31.712162    3367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:49:31.712177    3367 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:31.712185    3367 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:31.712507    3367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:49:32.020359    3367 main.go:141] libmachine: Creating SSH key...
	I0701 12:49:32.076745    3367 main.go:141] libmachine: Creating Disk image...
	I0701 12:49:32.076750    3367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:49:32.076882    3367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/disk.qcow2
	I0701 12:49:32.085248    3367 main.go:141] libmachine: STDOUT: 
	I0701 12:49:32.085261    3367 main.go:141] libmachine: STDERR: 
	I0701 12:49:32.085329    3367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/disk.qcow2 +20000M
	I0701 12:49:32.092442    3367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:49:32.092461    3367 main.go:141] libmachine: STDERR: 
	I0701 12:49:32.092484    3367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/disk.qcow2
	I0701 12:49:32.092491    3367 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:49:32.092527    3367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:bf:23:94:07:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/disk.qcow2
	I0701 12:49:32.094091    3367 main.go:141] libmachine: STDOUT: 
	I0701 12:49:32.094106    3367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:49:32.094123    3367 client.go:171] LocalClient.Create took 382.104375ms
	I0701 12:49:34.096482    3367 start.go:128] duration metric: createHost completed in 2.404132792s
	I0701 12:49:34.096540    3367 start.go:83] releasing machines lock for "force-systemd-flag-243000", held for 2.404252291s
	W0701 12:49:34.096601    3367 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:49:34.114765    3367 out.go:177] * Deleting "force-systemd-flag-243000" in qemu2 ...
	W0701 12:49:34.129102    3367 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:49:34.129134    3367 start.go:687] Will try again in 5 seconds ...
	I0701 12:49:39.131286    3367 start.go:365] acquiring machines lock for force-systemd-flag-243000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:49:39.151865    3367 start.go:369] acquired machines lock for "force-systemd-flag-243000" in 20.464542ms
	I0701 12:49:39.152030    3367 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-243000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-243000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:49:39.152299    3367 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:49:39.160917    3367 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 12:49:39.207384    3367 start.go:159] libmachine.API.Create for "force-systemd-flag-243000" (driver="qemu2")
	I0701 12:49:39.207427    3367 client.go:168] LocalClient.Create starting
	I0701 12:49:39.207550    3367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:49:39.207584    3367 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:39.207604    3367 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:39.207678    3367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:49:39.207705    3367 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:39.207719    3367 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:39.208257    3367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:49:39.422896    3367 main.go:141] libmachine: Creating SSH key...
	I0701 12:49:39.526777    3367 main.go:141] libmachine: Creating Disk image...
	I0701 12:49:39.526783    3367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:49:39.526931    3367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/disk.qcow2
	I0701 12:49:39.535444    3367 main.go:141] libmachine: STDOUT: 
	I0701 12:49:39.535460    3367 main.go:141] libmachine: STDERR: 
	I0701 12:49:39.535520    3367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/disk.qcow2 +20000M
	I0701 12:49:39.542579    3367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:49:39.542594    3367 main.go:141] libmachine: STDERR: 
	I0701 12:49:39.542610    3367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/disk.qcow2
	I0701 12:49:39.542619    3367 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:49:39.542664    3367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:c0:7c:0a:31:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-flag-243000/disk.qcow2
	I0701 12:49:39.544209    3367 main.go:141] libmachine: STDOUT: 
	I0701 12:49:39.544222    3367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:49:39.544235    3367 client.go:171] LocalClient.Create took 336.810542ms
	I0701 12:49:41.546466    3367 start.go:128] duration metric: createHost completed in 2.394166042s
	I0701 12:49:41.546530    3367 start.go:83] releasing machines lock for "force-systemd-flag-243000", held for 2.394686166s
	W0701 12:49:41.546957    3367 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-243000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-243000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:49:41.557534    3367 out.go:177] 
	W0701 12:49:41.562603    3367 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:49:41.562646    3367 out.go:239] * 
	* 
	W0701 12:49:41.565412    3367 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:49:41.574518    3367 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-243000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-243000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-243000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (78.864208ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-243000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-243000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-07-01 12:49:41.669163 -0700 PDT m=+896.741146501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-243000 -n force-systemd-flag-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-243000 -n force-systemd-flag-243000: exit status 7 (32.08125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-243000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-243000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-243000
--- FAIL: TestForceSystemdFlag (10.22s)
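
Note: TestForceSystemdFlag fails identically; both createHost attempts are refused on /var/run/socket_vmnet before a VM exists, so the cgroup-driver check never has a host to query. Complementing the dial probe after TestDockerFlags, a stat-based sketch (again hypothetical, same assumed socket path) separates a socket file that was never created from a stale one left behind by a daemon that exited:

	// stat_socket_vmnet.go - hypothetical diagnostic sketch.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Same assumed path as the dial probe above.
		const sock = "/var/run/socket_vmnet"
		fi, err := os.Stat(sock)
		if err != nil {
			// Socket file absent: the daemon never started on this agent.
			fmt.Fprintf(os.Stderr, "stat failed: %v\n", err)
			os.Exit(1)
		}
		if fi.Mode()&os.ModeSocket == 0 {
			fmt.Fprintf(os.Stderr, "%s exists but is not a unix socket (mode %v)\n", sock, fi.Mode())
			os.Exit(1)
		}
		// Present and a socket: a refused dial then suggests the daemon bound
		// the path and later exited, leaving a stale socket behind.
		fmt.Printf("%s is a unix socket (mode %v)\n", sock, fi.Mode())
	}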

TestForceSystemdEnv (10.55s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv


=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-232000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-232000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.342365875s)

-- stdout --
	* [force-systemd-env-232000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-232000 in cluster force-systemd-env-232000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-232000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:49:26.163088    3332 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:49:26.163276    3332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:49:26.163278    3332 out.go:309] Setting ErrFile to fd 2...
	I0701 12:49:26.163280    3332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:49:26.163356    3332 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:49:26.164410    3332 out.go:303] Setting JSON to false
	I0701 12:49:26.179480    3332 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1136,"bootTime":1688239830,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:49:26.179552    3332 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:49:26.184564    3332 out.go:177] * [force-systemd-env-232000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:49:26.192634    3332 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:49:26.196543    3332 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:49:26.192671    3332 notify.go:220] Checking for updates...
	I0701 12:49:26.202575    3332 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:49:26.205526    3332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:49:26.208560    3332 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:49:26.211651    3332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0701 12:49:26.214922    3332 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:49:26.214958    3332 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:49:26.219569    3332 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:49:26.226529    3332 start.go:297] selected driver: qemu2
	I0701 12:49:26.226535    3332 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:49:26.226541    3332 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:49:26.228453    3332 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:49:26.231614    3332 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:49:26.234670    3332 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 12:49:26.234689    3332 cni.go:84] Creating CNI manager for ""
	I0701 12:49:26.234697    3332 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:49:26.234701    3332 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:49:26.234714    3332 start_flags.go:319] config:
	{Name:force-systemd-env-232000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-232000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:49:26.238779    3332 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:49:26.245541    3332 out.go:177] * Starting control plane node force-systemd-env-232000 in cluster force-systemd-env-232000
	I0701 12:49:26.249524    3332 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:49:26.249548    3332 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:49:26.249559    3332 cache.go:57] Caching tarball of preloaded images
	I0701 12:49:26.249613    3332 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:49:26.249619    3332 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:49:26.249679    3332 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/force-systemd-env-232000/config.json ...
	I0701 12:49:26.249690    3332 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/force-systemd-env-232000/config.json: {Name:mk2bc3db178751abf0b84e63795fe0b7ebc1ed06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:49:26.249889    3332 start.go:365] acquiring machines lock for force-systemd-env-232000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:49:26.249917    3332 start.go:369] acquired machines lock for "force-systemd-env-232000" in 22.25µs
	I0701 12:49:26.249931    3332 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-232000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:49:26.249956    3332 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:49:26.257596    3332 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 12:49:26.272791    3332 start.go:159] libmachine.API.Create for "force-systemd-env-232000" (driver="qemu2")
	I0701 12:49:26.272815    3332 client.go:168] LocalClient.Create starting
	I0701 12:49:26.272871    3332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:49:26.272889    3332 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:26.272900    3332 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:26.272925    3332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:49:26.272939    3332 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:26.272945    3332 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:26.273233    3332 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:49:26.537655    3332 main.go:141] libmachine: Creating SSH key...
	I0701 12:49:26.643742    3332 main.go:141] libmachine: Creating Disk image...
	I0701 12:49:26.643754    3332 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:49:26.643911    3332 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/disk.qcow2
	I0701 12:49:26.652220    3332 main.go:141] libmachine: STDOUT: 
	I0701 12:49:26.652234    3332 main.go:141] libmachine: STDERR: 
	I0701 12:49:26.652281    3332 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/disk.qcow2 +20000M
	I0701 12:49:26.659461    3332 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:49:26.659481    3332 main.go:141] libmachine: STDERR: 
	I0701 12:49:26.659507    3332 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/disk.qcow2
	I0701 12:49:26.659521    3332 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:49:26.659561    3332 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:70:e2:2d:c8:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/disk.qcow2
	I0701 12:49:26.661101    3332 main.go:141] libmachine: STDOUT: 
	I0701 12:49:26.661115    3332 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:49:26.661133    3332 client.go:171] LocalClient.Create took 388.321291ms
	I0701 12:49:28.663190    3332 start.go:128] duration metric: createHost completed in 2.413271625s
	I0701 12:49:28.663210    3332 start.go:83] releasing machines lock for "force-systemd-env-232000", held for 2.413334416s
	W0701 12:49:28.663225    3332 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:49:28.667792    3332 out.go:177] * Deleting "force-systemd-env-232000" in qemu2 ...
	W0701 12:49:28.679382    3332 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:49:28.679388    3332 start.go:687] Will try again in 5 seconds ...
	I0701 12:49:33.681452    3332 start.go:365] acquiring machines lock for force-systemd-env-232000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:49:34.096685    3332 start.go:369] acquired machines lock for "force-systemd-env-232000" in 415.146292ms
	I0701 12:49:34.096840    3332 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-232000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:49:34.097174    3332 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:49:34.105810    3332 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0701 12:49:34.152375    3332 start.go:159] libmachine.API.Create for "force-systemd-env-232000" (driver="qemu2")
	I0701 12:49:34.152424    3332 client.go:168] LocalClient.Create starting
	I0701 12:49:34.152569    3332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:49:34.152613    3332 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:34.152631    3332 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:34.152713    3332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:49:34.152744    3332 main.go:141] libmachine: Decoding PEM data...
	I0701 12:49:34.152758    3332 main.go:141] libmachine: Parsing certificate...
	I0701 12:49:34.153361    3332 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:49:34.363190    3332 main.go:141] libmachine: Creating SSH key...
	I0701 12:49:34.419087    3332 main.go:141] libmachine: Creating Disk image...
	I0701 12:49:34.419093    3332 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:49:34.419242    3332 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/disk.qcow2
	I0701 12:49:34.427792    3332 main.go:141] libmachine: STDOUT: 
	I0701 12:49:34.427807    3332 main.go:141] libmachine: STDERR: 
	I0701 12:49:34.427866    3332 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/disk.qcow2 +20000M
	I0701 12:49:34.434905    3332 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:49:34.434918    3332 main.go:141] libmachine: STDERR: 
	I0701 12:49:34.434930    3332 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/disk.qcow2
	I0701 12:49:34.434936    3332 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:49:34.434985    3332 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:d5:e0:e0:c1:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/force-systemd-env-232000/disk.qcow2
	I0701 12:49:34.436529    3332 main.go:141] libmachine: STDOUT: 
	I0701 12:49:34.436542    3332 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:49:34.436556    3332 client.go:171] LocalClient.Create took 284.127833ms
	I0701 12:49:36.438716    3332 start.go:128] duration metric: createHost completed in 2.341556625s
	I0701 12:49:36.438794    3332 start.go:83] releasing machines lock for "force-systemd-env-232000", held for 2.342106417s
	W0701 12:49:36.439220    3332 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-232000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-232000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:49:36.447694    3332 out.go:177] 
	W0701 12:49:36.452820    3332 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:49:36.452862    3332 out.go:239] * 
	* 
	W0701 12:49:36.455601    3332 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:49:36.463719    3332 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-232000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-232000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-232000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (75.884334ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-232000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-232000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-07-01 12:49:36.55565 -0700 PDT m=+891.627537543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-232000 -n force-systemd-env-232000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-232000 -n force-systemd-env-232000: exit status 7 (33.038875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-232000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-232000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-232000
--- FAIL: TestForceSystemdEnv (10.55s)
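Both create attempts above fail at the same step: socket_vmnet_client is refused on /var/run/socket_vmnet, so QEMU is never launched and minikube exits with status 80 (GUEST_PROVISION). That points at the socket_vmnet daemon not running (or not listening) on this agent rather than at minikube itself. Below is a minimal Go sketch, assuming the default socket path shown in the failing command line, that reproduces the check outside the test suite:

package main

// Probe the socket_vmnet control socket roughly the way socket_vmnet_client
// does: a plain unix-domain dial. A "connection refused" error here matches
// the libmachine STDERR captured above.

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing command line
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}

If the dial fails the same way on the agent, restarting the socket_vmnet service is the likely fix; the other ~10s qemu2 start failures in this report show the same refusal, so they would share the cause.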

TestFunctional/parallel/ServiceCmdConnect (32.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-011000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-011000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-wvv4n" [82936aa7-b5e7-405b-a117-648d39ab168b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-wvv4n" [82936aa7-b5e7-405b-a117-648d39ab168b] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.011438125s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:30938
functional_test.go:1660: error fetching http://192.168.105.4:30938: Get "http://192.168.105.4:30938": dial tcp 192.168.105.4:30938: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30938: Get "http://192.168.105.4:30938": dial tcp 192.168.105.4:30938: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30938: Get "http://192.168.105.4:30938": dial tcp 192.168.105.4:30938: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30938: Get "http://192.168.105.4:30938": dial tcp 192.168.105.4:30938: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30938: Get "http://192.168.105.4:30938": dial tcp 192.168.105.4:30938: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30938: Get "http://192.168.105.4:30938": dial tcp 192.168.105.4:30938: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30938: Get "http://192.168.105.4:30938": dial tcp 192.168.105.4:30938: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:30938: Get "http://192.168.105.4:30938": dial tcp 192.168.105.4:30938: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-011000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-58d66798bb-wvv4n
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-011000/192.168.105.4
Start Time:       Sat, 01 Jul 2023 12:38:57 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=58d66798bb
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-58d66798bb
Containers:
  echoserver-arm:
    Container ID:   docker://6b0c53abf73470bd2e5ee4e4b15cced28bc016c86bb82aa7ce406412ef22a65e
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 01 Jul 2023 12:39:18 -0700
      Finished:     Sat, 01 Jul 2023 12:39:18 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pccdg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-pccdg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  31s                default-scheduler  Successfully assigned default/hello-node-connect-58d66798bb-wvv4n to functional-011000
  Normal   Pulling    31s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     26s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 4.41912873s (4.419137647s including waiting)
  Normal   Created    10s (x3 over 26s)  kubelet            Created container echoserver-arm
  Normal   Started    10s (x3 over 26s)  kubelet            Started container echoserver-arm
  Normal   Pulled     10s (x2 over 26s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    7s (x5 over 25s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-58d66798bb-wvv4n_default(82936aa7-b5e7-405b-a117-648d39ab168b)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-011000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
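"exec /usr/sbin/nginx: exec format error" means the kernel refused to execute the container's entrypoint binary, which on this arm64 node most likely indicates an image whose binaries were built for a different architecture; that would account for the CrashLoopBackOff in the pod describe above. A hedged Go sketch to confirm, which just shells out to "docker image inspect" for the image used by the test:

package main

// Print the OS/architecture recorded in the image config. On this arm64
// node, anything other than linux/arm64 would explain the
// "exec format error" in the pod logs above.

import (
	"fmt"
	"os/exec"
)

func main() {
	img := "registry.k8s.io/echoserver-arm:1.8" // image from the failing deployment
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Os}}/{{.Architecture}}", img).CombinedOutput()
	if err != nil {
		fmt.Printf("inspect failed: %v: %s", err, out)
		return
	}
	fmt.Printf("%s -> %s", img, out)
}

The same check works as a one-liner on the node: docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8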
functional_test.go:1613: (dbg) Run:  kubectl --context functional-011000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.151.105
IPs:                      10.110.151.105
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30938/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
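Note the empty Endpoints: field above: because the pod never becomes Ready, the Service has no endpoints, so nothing answers behind NodePort 30938 and the fetches earlier in this test are refused. A small sketch that makes that check explicit, assuming the kubectl context name used throughout this test:

package main

// List the ready endpoint IPs behind the Service; an empty result is what
// turns the NodePort fetches above into "connection refused".

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-011000",
		"get", "endpoints", "hello-node-connect",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v: %s\n", err, out)
		return
	}
	ips := strings.TrimSpace(string(out))
	if ips == "" {
		fmt.Println("no ready endpoints - the NodePort will refuse connections")
		return
	}
	fmt.Printf("ready endpoint IPs: %s\n", ips)
}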
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-011000 -n functional-011000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| tunnel  | functional-011000 tunnel                                                                                             | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:38 PDT |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| tunnel  | functional-011000 tunnel                                                                                             | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:38 PDT |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| tunnel  | functional-011000 tunnel                                                                                             | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:38 PDT |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| addons  | functional-011000 addons list                                                                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:38 PDT | 01 Jul 23 12:38 PDT |
	| addons  | functional-011000 addons list                                                                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:38 PDT | 01 Jul 23 12:38 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-011000 service                                                                                            | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| service | functional-011000 service list                                                                                       | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	| service | functional-011000 service list                                                                                       | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-011000 service                                                                                            | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | --namespace=default --https                                                                                          |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                     |                   |         |         |                     |                     |
	| service | functional-011000                                                                                                    | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-011000 service                                                                                            | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| mount   | -p functional-011000                                                                                                 | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port603677003/001:/mount-9p       |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-011000 ssh findmnt                                                                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-011000 ssh findmnt                                                                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-011000 ssh -- ls                                                                                          | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-011000 ssh cat                                                                                            | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | /mount-9p/test-1688240358196898000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-011000 ssh stat                                                                                           | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-011000 ssh stat                                                                                           | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-011000 ssh sudo                                                                                           | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-011000 ssh findmnt                                                                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-011000                                                                                                 | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2838750774/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-011000 ssh findmnt                                                                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-011000 ssh findmnt                                                                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-011000 ssh findmnt                                                                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-011000 ssh findmnt                                                                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/01 12:38:05
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:38:05.437667    1896 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:38:05.437816    1896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:38:05.437818    1896 out.go:309] Setting ErrFile to fd 2...
	I0701 12:38:05.437820    1896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:38:05.437880    1896 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:38:05.438852    1896 out.go:303] Setting JSON to false
	I0701 12:38:05.454405    1896 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":455,"bootTime":1688239830,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:38:05.454462    1896 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:38:05.456797    1896 out.go:177] * [functional-011000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:38:05.463715    1896 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:38:05.463782    1896 notify.go:220] Checking for updates...
	I0701 12:38:05.467595    1896 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:38:05.470622    1896 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:38:05.473687    1896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:38:05.476668    1896 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:38:05.479659    1896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:38:05.482985    1896 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:38:05.483023    1896 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:38:05.486528    1896 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 12:38:05.493667    1896 start.go:297] selected driver: qemu2
	I0701 12:38:05.493671    1896 start.go:944] validating driver "qemu2" against &{Name:functional-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-011000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:38:05.493765    1896 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:38:05.495636    1896 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:38:05.495656    1896 cni.go:84] Creating CNI manager for ""
	I0701 12:38:05.495661    1896 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:38:05.495669    1896 start_flags.go:319] config:
	{Name:functional-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-011000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:38:05.499699    1896 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:38:05.506634    1896 out.go:177] * Starting control plane node functional-011000 in cluster functional-011000
	I0701 12:38:05.510616    1896 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:38:05.510637    1896 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:38:05.510646    1896 cache.go:57] Caching tarball of preloaded images
	I0701 12:38:05.510740    1896 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:38:05.510747    1896 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:38:05.510806    1896 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/config.json ...
	I0701 12:38:05.511181    1896 start.go:365] acquiring machines lock for functional-011000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:38:05.511212    1896 start.go:369] acquired machines lock for "functional-011000" in 26.459µs
	I0701 12:38:05.511221    1896 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:38:05.511224    1896 fix.go:54] fixHost starting: 
	I0701 12:38:05.511918    1896 fix.go:102] recreateIfNeeded on functional-011000: state=Running err=<nil>
	W0701 12:38:05.511929    1896 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:38:05.519627    1896 out.go:177] * Updating the running qemu2 "functional-011000" VM ...
	I0701 12:38:05.523535    1896 machine.go:88] provisioning docker machine ...
	I0701 12:38:05.523542    1896 buildroot.go:166] provisioning hostname "functional-011000"
	I0701 12:38:05.523581    1896 main.go:141] libmachine: Using SSH client type: native
	I0701 12:38:05.523817    1896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c49100] 0x100c4bb60 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0701 12:38:05.523821    1896 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-011000 && echo "functional-011000" | sudo tee /etc/hostname
	I0701 12:38:05.602885    1896 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-011000
	
	I0701 12:38:05.602941    1896 main.go:141] libmachine: Using SSH client type: native
	I0701 12:38:05.603176    1896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c49100] 0x100c4bb60 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0701 12:38:05.603184    1896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-011000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-011000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-011000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:38:05.672104    1896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:38:05.672111    1896 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15452-1041/.minikube CaCertPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15452-1041/.minikube}
	I0701 12:38:05.672121    1896 buildroot.go:174] setting up certificates
	I0701 12:38:05.672127    1896 provision.go:83] configureAuth start
	I0701 12:38:05.672130    1896 provision.go:138] copyHostCerts
	I0701 12:38:05.672192    1896 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.pem, removing ...
	I0701 12:38:05.672197    1896 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.pem
	I0701 12:38:05.672303    1896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.pem (1078 bytes)
	I0701 12:38:05.672474    1896 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1041/.minikube/cert.pem, removing ...
	I0701 12:38:05.672475    1896 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1041/.minikube/cert.pem
	I0701 12:38:05.672511    1896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15452-1041/.minikube/cert.pem (1123 bytes)
	I0701 12:38:05.672608    1896 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1041/.minikube/key.pem, removing ...
	I0701 12:38:05.672610    1896 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1041/.minikube/key.pem
	I0701 12:38:05.672730    1896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15452-1041/.minikube/key.pem (1679 bytes)
	I0701 12:38:05.672830    1896 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca-key.pem org=jenkins.functional-011000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-011000]
	I0701 12:38:05.702134    1896 provision.go:172] copyRemoteCerts
	I0701 12:38:05.702182    1896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:38:05.702188    1896 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/functional-011000/id_rsa Username:docker}
	I0701 12:38:05.739133    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:38:05.746529    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0701 12:38:05.754255    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 12:38:05.761484    1896 provision.go:86] duration metric: configureAuth took 89.35325ms
	I0701 12:38:05.761490    1896 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:38:05.761598    1896 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:38:05.761634    1896 main.go:141] libmachine: Using SSH client type: native
	I0701 12:38:05.761853    1896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c49100] 0x100c4bb60 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0701 12:38:05.761856    1896 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:38:05.830424    1896 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:38:05.830430    1896 buildroot.go:70] root file system type: tmpfs
	I0701 12:38:05.830505    1896 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:38:05.830554    1896 main.go:141] libmachine: Using SSH client type: native
	I0701 12:38:05.830786    1896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c49100] 0x100c4bb60 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0701 12:38:05.830819    1896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:38:05.905066    1896 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:38:05.905107    1896 main.go:141] libmachine: Using SSH client type: native
	I0701 12:38:05.905336    1896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c49100] 0x100c4bb60 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0701 12:38:05.905343    1896 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:38:05.976708    1896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:38:05.976714    1896 machine.go:91] provisioned docker machine in 453.185167ms
	I0701 12:38:05.976717    1896 start.go:300] post-start starting for "functional-011000" (driver="qemu2")
	I0701 12:38:05.976722    1896 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:38:05.976775    1896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:38:05.976782    1896 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/functional-011000/id_rsa Username:docker}
	I0701 12:38:06.015252    1896 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:38:06.016793    1896 info.go:137] Remote host: Buildroot 2021.02.12
	I0701 12:38:06.016798    1896 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15452-1041/.minikube/addons for local assets ...
	I0701 12:38:06.016858    1896 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15452-1041/.minikube/files for local assets ...
	I0701 12:38:06.016963    1896 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/14612.pem -> 14612.pem in /etc/ssl/certs
	I0701 12:38:06.017067    1896 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/test/nested/copy/1461/hosts -> hosts in /etc/test/nested/copy/1461
	I0701 12:38:06.017101    1896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1461
	I0701 12:38:06.019856    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/14612.pem --> /etc/ssl/certs/14612.pem (1708 bytes)
	I0701 12:38:06.027553    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/test/nested/copy/1461/hosts --> /etc/test/nested/copy/1461/hosts (40 bytes)
	I0701 12:38:06.035016    1896 start.go:303] post-start completed in 58.29425ms
	I0701 12:38:06.035020    1896 fix.go:56] fixHost completed within 523.807458ms
	I0701 12:38:06.035063    1896 main.go:141] libmachine: Using SSH client type: native
	I0701 12:38:06.035293    1896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c49100] 0x100c4bb60 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0701 12:38:06.035296    1896 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0701 12:38:06.105924    1896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688240286.119298428
	
	I0701 12:38:06.105928    1896 fix.go:206] guest clock: 1688240286.119298428
	I0701 12:38:06.105931    1896 fix.go:219] Guest: 2023-07-01 12:38:06.119298428 -0700 PDT Remote: 2023-07-01 12:38:06.035021 -0700 PDT m=+0.616092834 (delta=84.277428ms)
	I0701 12:38:06.105943    1896 fix.go:190] guest clock delta is within tolerance: 84.277428ms
	I0701 12:38:06.105945    1896 start.go:83] releasing machines lock for "functional-011000", held for 594.742ms
	I0701 12:38:06.106236    1896 ssh_runner.go:195] Run: cat /version.json
	I0701 12:38:06.106242    1896 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/functional-011000/id_rsa Username:docker}
	I0701 12:38:06.106253    1896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:38:06.106270    1896 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/functional-011000/id_rsa Username:docker}
	I0701 12:38:06.181702    1896 ssh_runner.go:195] Run: systemctl --version
	I0701 12:38:06.183659    1896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 12:38:06.185321    1896 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:38:06.185348    1896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:38:06.188159    1896 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0701 12:38:06.188163    1896 start.go:466] detecting cgroup driver to use...
	I0701 12:38:06.188224    1896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:38:06.193515    1896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:38:06.196667    1896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:38:06.199645    1896 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:38:06.199668    1896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:38:06.202916    1896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:38:06.206457    1896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:38:06.210323    1896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:38:06.214060    1896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:38:06.217524    1896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
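
The sed commands above rewrite /etc/containerd/config.toml in place so containerd uses the cgroupfs driver (SystemdCgroup = false) and the runc v2 shim. A small illustrative Go equivalent of the SystemdCgroup rewrite, assuming the usual config.toml layout:

    // Sketch: flip SystemdCgroup to false in a containerd config fragment,
    // preserving leading indentation like the sed expression in the log.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
            "  SystemdCgroup = true\n"
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
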
	I0701 12:38:06.220511    1896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:38:06.223250    1896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:38:06.226761    1896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:38:06.299795    1896 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:38:06.306367    1896 start.go:466] detecting cgroup driver to use...
	I0701 12:38:06.306424    1896 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:38:06.312993    1896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:38:06.317634    1896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:38:06.334164    1896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:38:06.339393    1896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:38:06.344102    1896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:38:06.349740    1896 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:38:06.350987    1896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:38:06.354227    1896 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:38:06.359944    1896 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:38:06.444463    1896 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:38:06.527218    1896 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:38:06.527227    1896 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0701 12:38:06.532787    1896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:38:06.617202    1896 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:38:17.985415    1896 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.368410125s)
	I0701 12:38:17.985496    1896 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:38:18.053735    1896 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:38:18.121582    1896 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:38:18.189341    1896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:38:18.255613    1896 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:38:18.263397    1896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:38:18.349870    1896 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0701 12:38:18.377261    1896 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:38:18.377351    1896 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:38:18.379365    1896 start.go:534] Will wait 60s for crictl version
	I0701 12:38:18.379387    1896 ssh_runner.go:195] Run: which crictl
	I0701 12:38:18.380704    1896 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:38:18.396044    1896 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0701 12:38:18.396111    1896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:38:18.409253    1896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:38:18.424913    1896 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0701 12:38:18.425002    1896 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0701 12:38:18.431093    1896 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0701 12:38:18.434007    1896 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:38:18.434048    1896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:38:18.441233    1896 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-011000
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0701 12:38:18.441242    1896 docker.go:566] Images already preloaded, skipping extraction
	I0701 12:38:18.441283    1896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:38:18.446905    1896 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-011000
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0701 12:38:18.446912    1896 cache_images.go:84] Images are preloaded, skipping loading
	I0701 12:38:18.446956    1896 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 12:38:18.454303    1896 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0701 12:38:18.454319    1896 cni.go:84] Creating CNI manager for ""
	I0701 12:38:18.454323    1896 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:38:18.454326    1896 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 12:38:18.454334    1896 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-011000 NodeName:functional-011000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
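
As the extraconfig.go line above shows, the user-supplied apiserver.enable-admission-plugins value replaces the default plugin list before the kubeadm config below is rendered. A hedged sketch of that override step (type and variable names are illustrative, not minikube's):

    // Sketch: a user --extra-config option overriding a default apiserver arg.
    package main

    import "fmt"

    // ExtraOption mirrors the shape of a user-provided extra-config flag.
    type ExtraOption struct{ Component, Key, Value string }

    func main() {
        // Default apiserver args (list taken verbatim from the log line above).
        apiServerArgs := map[string]string{
            "enable-admission-plugins": "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota",
        }
        user := []ExtraOption{{"apiserver", "enable-admission-plugins", "NamespaceAutoProvision"}}
        for _, o := range user {
            if o.Component == "apiserver" {
                apiServerArgs[o.Key] = o.Value // user value wins over the default
            }
        }
        fmt.Println(apiServerArgs["enable-admission-plugins"]) // NamespaceAutoProvision
    }
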
	I0701 12:38:18.454411    1896 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-011000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0701 12:38:18.454439    1896 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-011000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:functional-011000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0701 12:38:18.454495    1896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0701 12:38:18.458295    1896 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:38:18.458322    1896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 12:38:18.461552    1896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0701 12:38:18.466798    1896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:38:18.472436    1896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
	I0701 12:38:18.477474    1896 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0701 12:38:18.478684    1896 certs.go:56] Setting up /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000 for IP: 192.168.105.4
	I0701 12:38:18.478691    1896 certs.go:190] acquiring lock for shared ca certs: {Name:mk0d2f6007eea276ce17a3a9c6aca904411113ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:38:18.478822    1896 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.key
	I0701 12:38:18.478862    1896 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.key
	I0701 12:38:18.478917    1896 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.key
	I0701 12:38:18.478960    1896 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/apiserver.key.942c473b
	I0701 12:38:18.479000    1896 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/proxy-client.key
	I0701 12:38:18.479141    1896 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/1461.pem (1338 bytes)
	W0701 12:38:18.479164    1896 certs.go:433] ignoring /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/1461_empty.pem, impossibly tiny 0 bytes
	I0701 12:38:18.479170    1896 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca-key.pem (1679 bytes)
	I0701 12:38:18.479188    1896 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:38:18.479211    1896 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:38:18.479228    1896 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/key.pem (1679 bytes)
	I0701 12:38:18.479265    1896 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/14612.pem (1708 bytes)
	I0701 12:38:18.479588    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 12:38:18.486750    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 12:38:18.494416    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:38:18.502358    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0701 12:38:18.509920    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:38:18.517786    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:38:18.525694    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:38:18.532755    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:38:18.539916    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/1461.pem --> /usr/share/ca-certificates/1461.pem (1338 bytes)
	I0701 12:38:18.546913    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/14612.pem --> /usr/share/ca-certificates/14612.pem (1708 bytes)
	I0701 12:38:18.554521    1896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:38:18.562606    1896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 12:38:18.568078    1896 ssh_runner.go:195] Run: openssl version
	I0701 12:38:18.570045    1896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14612.pem && ln -fs /usr/share/ca-certificates/14612.pem /etc/ssl/certs/14612.pem"
	I0701 12:38:18.573406    1896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14612.pem
	I0701 12:38:18.575069    1896 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul  1 19:36 /usr/share/ca-certificates/14612.pem
	I0701 12:38:18.575087    1896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14612.pem
	I0701 12:38:18.577017    1896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14612.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 12:38:18.579792    1896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:38:18.583166    1896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:38:18.584962    1896 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  1 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:38:18.584989    1896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:38:18.586929    1896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:38:18.590378    1896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1461.pem && ln -fs /usr/share/ca-certificates/1461.pem /etc/ssl/certs/1461.pem"
	I0701 12:38:18.593913    1896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1461.pem
	I0701 12:38:18.595490    1896 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul  1 19:36 /usr/share/ca-certificates/1461.pem
	I0701 12:38:18.595509    1896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1461.pem
	I0701 12:38:18.597537    1896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1461.pem /etc/ssl/certs/51391683.0"
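
The test/ln -fs pairs above maintain OpenSSL's hashed-directory layout: each CA PEM under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0, where the hash is the output of `openssl x509 -hash -noout`. An illustrative Go helper doing the same (assumes the openssl CLI is on PATH and write access to the certs directory):

    // Sketch: create the <subject-hash>.0 symlink for a CA certificate.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCert(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join(certsDir, hash+".0")
        os.Remove(link) // replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }
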
	I0701 12:38:18.600274    1896 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0701 12:38:18.601710    1896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 12:38:18.603478    1896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 12:38:18.605145    1896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 12:38:18.606906    1896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 12:38:18.608591    1896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 12:38:18.610294    1896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
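
`openssl x509 -checkend 86400` exits non-zero if the certificate will expire within the next 24 hours. The same check expressed with Go's crypto/x509, as a sketch:

    // Sketch: report whether a PEM certificate expires within d.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // True when NotAfter falls before now+d, i.e. -checkend would fail.
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }
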
	I0701 12:38:18.612175    1896 kubeadm.go:404] StartCluster: {Name:functional-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-011000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:38:18.612240    1896 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 12:38:18.618032    1896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 12:38:18.621838    1896 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0701 12:38:18.621845    1896 kubeadm.go:636] restartCluster start
	I0701 12:38:18.621870    1896 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 12:38:18.624976    1896 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:38:18.625278    1896 kubeconfig.go:92] found "functional-011000" server: "https://192.168.105.4:8441"
	I0701 12:38:18.626054    1896 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 12:38:18.629264    1896 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0701 12:38:18.629267    1896 kubeadm.go:1128] stopping kube-system containers ...
	I0701 12:38:18.629300    1896 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 12:38:18.636254    1896 docker.go:462] Stopping containers: [2d2a9ba66fd3 dd0af922a985 955e72383081 a20c9537a5fd c534359a1c21 fa15e19e6e80 ee3de8593401 c5662cbb8c45 e1b80b47f3a6 37a1c00baaf8 2b24d652ffd2 ee7a8f9dfe82 0f4fdb03c56e f32d274a74f4 0aae2492391a 644ce09d6815 a3efc70b6147 dea2e4b3fca4 36228073845b 6a8f6ba0fcc1 e4d54d973647 528089d8b965 f0bd016a967c 79e3c2777e21 abca3b3caabe f1971f32e5ab b9880fe0b089 fe645cfcecbe]
	I0701 12:38:18.636306    1896 ssh_runner.go:195] Run: docker stop 2d2a9ba66fd3 dd0af922a985 955e72383081 a20c9537a5fd c534359a1c21 fa15e19e6e80 ee3de8593401 c5662cbb8c45 e1b80b47f3a6 37a1c00baaf8 2b24d652ffd2 ee7a8f9dfe82 0f4fdb03c56e f32d274a74f4 0aae2492391a 644ce09d6815 a3efc70b6147 dea2e4b3fca4 36228073845b 6a8f6ba0fcc1 e4d54d973647 528089d8b965 f0bd016a967c 79e3c2777e21 abca3b3caabe f1971f32e5ab b9880fe0b089 fe645cfcecbe
	I0701 12:38:18.647554    1896 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0701 12:38:18.746866    1896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 12:38:18.751646    1896 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul  1 19:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul  1 19:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul  1 19:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul  1 19:36 /etc/kubernetes/scheduler.conf
	
	I0701 12:38:18.751680    1896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0701 12:38:18.755345    1896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0701 12:38:18.758811    1896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0701 12:38:18.762395    1896 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:38:18.762415    1896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 12:38:18.766105    1896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0701 12:38:18.769248    1896 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0701 12:38:18.769271    1896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 12:38:18.772087    1896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 12:38:18.775111    1896 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0701 12:38:18.775114    1896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 12:38:18.799151    1896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 12:38:19.414897    1896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0701 12:38:19.514271    1896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 12:38:19.556701    1896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
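
On the restart path, minikube re-runs individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml rather than doing a full `kubeadm init`. A simplified driver loop for those phases (omits the sudo/PATH wrapping shown in the log lines above):

    // Sketch: re-run the kubeadm init phases used by the cluster-restart path.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            out, err := exec.Command("kubeadm", args...).CombinedOutput()
            fmt.Printf("kubeadm %v: err=%v\n%s", p, err, out)
        }
    }
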
	I0701 12:38:19.607747    1896 api_server.go:52] waiting for apiserver process to appear ...
	I0701 12:38:19.607806    1896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:38:20.115863    1896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:38:20.615870    1896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:38:20.620366    1896 api_server.go:72] duration metric: took 1.012633667s to wait for apiserver process to appear ...
	I0701 12:38:20.620372    1896 api_server.go:88] waiting for apiserver healthz status ...
	I0701 12:38:20.620382    1896 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0701 12:38:22.246319    1896 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0701 12:38:22.246327    1896 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0701 12:38:22.748364    1896 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0701 12:38:22.751635    1896 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0701 12:38:22.751642    1896 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0701 12:38:23.248386    1896 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0701 12:38:23.255008    1896 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0701 12:38:23.255017    1896 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0701 12:38:23.748453    1896 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0701 12:38:23.761430    1896 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0701 12:38:23.775958    1896 api_server.go:141] control plane version: v1.27.3
	I0701 12:38:23.775974    1896 api_server.go:131] duration metric: took 3.155657625s to wait for apiserver health ...
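
The healthz probes above progress from 403 (the anonymous probe is rejected while RBAC bootstrap roles are still being created) through 500 (post-start hooks still failing) to 200 once the apiserver settles. A minimal Go poller with the same shape; it skips TLS verification because the apiserver cert is not in the system trust store (a real client would pin the cluster CA instead):

    // Sketch: poll /healthz until it returns 200 or a deadline passes.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // "returned 200: ok"
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        return fmt.Errorf("healthz did not become ready within %v", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.105.4:8441/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
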
	I0701 12:38:23.775983    1896 cni.go:84] Creating CNI manager for ""
	I0701 12:38:23.775994    1896 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:38:23.780223    1896 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0701 12:38:23.784437    1896 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0701 12:38:23.795218    1896 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0701 12:38:23.813418    1896 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 12:38:23.819622    1896 system_pods.go:59] 7 kube-system pods found
	I0701 12:38:23.819636    1896 system_pods.go:61] "coredns-5d78c9869d-j2pqz" [c4d3afd8-8856-4199-977e-a5b16323a9da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0701 12:38:23.819640    1896 system_pods.go:61] "etcd-functional-011000" [f709fedd-3081-42cc-9e23-2a223d3de125] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0701 12:38:23.819644    1896 system_pods.go:61] "kube-apiserver-functional-011000" [ec9aec5e-81d5-414b-af8f-9812ae77621f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0701 12:38:23.819647    1896 system_pods.go:61] "kube-controller-manager-functional-011000" [fd63dbc4-d5b1-4566-a555-86f7b214470e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 12:38:23.819650    1896 system_pods.go:61] "kube-proxy-qfh78" [381b799e-1928-4ae7-944a-8d3711435db5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0701 12:38:23.819653    1896 system_pods.go:61] "kube-scheduler-functional-011000" [24ec50b7-defb-4349-8a85-9b43186d9c90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0701 12:38:23.819655    1896 system_pods.go:61] "storage-provisioner" [c6de41e7-048e-46f4-99d8-61d3c3eeaee5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0701 12:38:23.819658    1896 system_pods.go:74] duration metric: took 6.233667ms to wait for pod list to return data ...
	I0701 12:38:23.819662    1896 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:38:23.821822    1896 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0701 12:38:23.821830    1896 node_conditions.go:123] node cpu capacity is 2
	I0701 12:38:23.821835    1896 node_conditions.go:105] duration metric: took 2.171208ms to run NodePressure ...
	I0701 12:38:23.821842    1896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0701 12:38:23.950665    1896 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0701 12:38:23.953326    1896 kubeadm.go:787] kubelet initialised
	I0701 12:38:23.953330    1896 kubeadm.go:788] duration metric: took 2.65775ms waiting for restarted kubelet to initialise ...
	I0701 12:38:23.953333    1896 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:38:23.956704    1896 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-j2pqz" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:25.976514    1896 pod_ready.go:102] pod "coredns-5d78c9869d-j2pqz" in "kube-system" namespace has status "Ready":"False"
	I0701 12:38:28.476124    1896 pod_ready.go:102] pod "coredns-5d78c9869d-j2pqz" in "kube-system" namespace has status "Ready":"False"
	I0701 12:38:29.471341    1896 pod_ready.go:92] pod "coredns-5d78c9869d-j2pqz" in "kube-system" namespace has status "Ready":"True"
	I0701 12:38:29.471362    1896 pod_ready.go:81] duration metric: took 5.514752917s waiting for pod "coredns-5d78c9869d-j2pqz" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:29.471375    1896 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:31.494038    1896 pod_ready.go:102] pod "etcd-functional-011000" in "kube-system" namespace has status "Ready":"False"
	I0701 12:38:33.991781    1896 pod_ready.go:92] pod "etcd-functional-011000" in "kube-system" namespace has status "Ready":"True"
	I0701 12:38:33.991797    1896 pod_ready.go:81] duration metric: took 4.520499167s waiting for pod "etcd-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:33.991812    1896 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:36.013466    1896 pod_ready.go:102] pod "kube-apiserver-functional-011000" in "kube-system" namespace has status "Ready":"False"
	I0701 12:38:36.509089    1896 pod_ready.go:92] pod "kube-apiserver-functional-011000" in "kube-system" namespace has status "Ready":"True"
	I0701 12:38:36.509109    1896 pod_ready.go:81] duration metric: took 2.51733575s waiting for pod "kube-apiserver-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:36.509122    1896 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:36.517414    1896 pod_ready.go:92] pod "kube-controller-manager-functional-011000" in "kube-system" namespace has status "Ready":"True"
	I0701 12:38:36.517421    1896 pod_ready.go:81] duration metric: took 8.292291ms waiting for pod "kube-controller-manager-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:36.517430    1896 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qfh78" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:36.522697    1896 pod_ready.go:92] pod "kube-proxy-qfh78" in "kube-system" namespace has status "Ready":"True"
	I0701 12:38:36.522702    1896 pod_ready.go:81] duration metric: took 5.267583ms waiting for pod "kube-proxy-qfh78" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:36.522710    1896 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:36.527456    1896 pod_ready.go:92] pod "kube-scheduler-functional-011000" in "kube-system" namespace has status "Ready":"True"
	I0701 12:38:36.527460    1896 pod_ready.go:81] duration metric: took 4.746209ms waiting for pod "kube-scheduler-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:36.527468    1896 pod_ready.go:38] duration metric: took 12.574366834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
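
pod_ready.go above polls each system-critical pod until its Ready condition reports True. A comparable sketch using client-go (assumed available as a dependency; the namespace and pod name are taken from the log):

    // Sketch: wait for a pod's Ready condition via client-go polling.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // not visible yet; keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitPodReady(cs, "kube-system", "coredns-5d78c9869d-j2pqz", 4*time.Minute))
    }
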
	I0701 12:38:36.527488    1896 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 12:38:36.536043    1896 ops.go:34] apiserver oom_adj: -16
	I0701 12:38:36.536049    1896 kubeadm.go:640] restartCluster took 17.914538917s
	I0701 12:38:36.536054    1896 kubeadm.go:406] StartCluster complete in 17.924224334s
	I0701 12:38:36.536067    1896 settings.go:142] acquiring lock: {Name:mk1853b69cc489034eba1c68e94bf3f8bc0ceb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:38:36.536237    1896 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:38:36.536823    1896 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/kubeconfig: {Name:mk6d6ec6f258eefdfd78eed77d0a2eac619f380e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:38:36.537206    1896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 12:38:36.537260    1896 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0701 12:38:36.537325    1896 addons.go:66] Setting storage-provisioner=true in profile "functional-011000"
	I0701 12:38:36.537332    1896 addons.go:66] Setting default-storageclass=true in profile "functional-011000"
	I0701 12:38:36.537337    1896 addons.go:228] Setting addon storage-provisioner=true in "functional-011000"
	W0701 12:38:36.537341    1896 addons.go:237] addon storage-provisioner should already be in state true
	I0701 12:38:36.537352    1896 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-011000"
	I0701 12:38:36.537382    1896 host.go:66] Checking if "functional-011000" exists ...
	I0701 12:38:36.537382    1896 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:38:36.544280    1896 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 12:38:36.547464    1896 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 12:38:36.547469    1896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 12:38:36.547479    1896 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/functional-011000/id_rsa Username:docker}
	I0701 12:38:36.548085    1896 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-011000" context rescaled to 1 replicas
	I0701 12:38:36.548099    1896 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:38:36.552258    1896 out.go:177] * Verifying Kubernetes components...
	I0701 12:38:36.551004    1896 addons.go:228] Setting addon default-storageclass=true in "functional-011000"
	W0701 12:38:36.560254    1896 addons.go:237] addon default-storageclass should already be in state true
	I0701 12:38:36.560269    1896 host.go:66] Checking if "functional-011000" exists ...
	I0701 12:38:36.560322    1896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:38:36.561219    1896 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 12:38:36.561223    1896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 12:38:36.561229    1896 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/functional-011000/id_rsa Username:docker}
	I0701 12:38:36.590611    1896 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0701 12:38:36.590646    1896 node_ready.go:35] waiting up to 6m0s for node "functional-011000" to be "Ready" ...
	I0701 12:38:36.592073    1896 node_ready.go:49] node "functional-011000" has status "Ready":"True"
	I0701 12:38:36.592077    1896 node_ready.go:38] duration metric: took 1.426ms waiting for node "functional-011000" to be "Ready" ...
	I0701 12:38:36.592079    1896 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:38:36.594707    1896 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-j2pqz" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:36.599108    1896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 12:38:36.611825    1896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 12:38:36.903478    1896 pod_ready.go:92] pod "coredns-5d78c9869d-j2pqz" in "kube-system" namespace has status "Ready":"True"
	I0701 12:38:36.903484    1896 pod_ready.go:81] duration metric: took 308.778584ms waiting for pod "coredns-5d78c9869d-j2pqz" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:36.903487    1896 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:36.955059    1896 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0701 12:38:36.958837    1896 addons.go:499] enable addons completed in 421.611958ms: enabled=[storage-provisioner default-storageclass]
	I0701 12:38:37.306495    1896 pod_ready.go:92] pod "etcd-functional-011000" in "kube-system" namespace has status "Ready":"True"
	I0701 12:38:37.306517    1896 pod_ready.go:81] duration metric: took 403.031416ms waiting for pod "etcd-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:37.306535    1896 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:37.708811    1896 pod_ready.go:92] pod "kube-apiserver-functional-011000" in "kube-system" namespace has status "Ready":"True"
	I0701 12:38:37.708844    1896 pod_ready.go:81] duration metric: took 402.302792ms waiting for pod "kube-apiserver-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:37.708866    1896 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:38.110013    1896 pod_ready.go:92] pod "kube-controller-manager-functional-011000" in "kube-system" namespace has status "Ready":"True"
	I0701 12:38:38.110045    1896 pod_ready.go:81] duration metric: took 401.170459ms waiting for pod "kube-controller-manager-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:38.110072    1896 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qfh78" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:38.510103    1896 pod_ready.go:92] pod "kube-proxy-qfh78" in "kube-system" namespace has status "Ready":"True"
	I0701 12:38:38.510129    1896 pod_ready.go:81] duration metric: took 400.051ms waiting for pod "kube-proxy-qfh78" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:38.510150    1896 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:38.909814    1896 pod_ready.go:92] pod "kube-scheduler-functional-011000" in "kube-system" namespace has status "Ready":"True"
	I0701 12:38:38.909844    1896 pod_ready.go:81] duration metric: took 399.686167ms waiting for pod "kube-scheduler-functional-011000" in "kube-system" namespace to be "Ready" ...
	I0701 12:38:38.909872    1896 pod_ready.go:38] duration metric: took 2.317825541s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:38:38.909949    1896 api_server.go:52] waiting for apiserver process to appear ...
	I0701 12:38:38.910267    1896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:38:38.927642    1896 api_server.go:72] duration metric: took 2.379563125s to wait for apiserver process to appear ...
	I0701 12:38:38.927660    1896 api_server.go:88] waiting for apiserver healthz status ...
	I0701 12:38:38.927682    1896 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0701 12:38:38.937355    1896 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0701 12:38:38.938877    1896 api_server.go:141] control plane version: v1.27.3
	I0701 12:38:38.938886    1896 api_server.go:131] duration metric: took 11.22225ms to wait for apiserver health ...
	I0701 12:38:38.938898    1896 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 12:38:39.119293    1896 system_pods.go:59] 7 kube-system pods found
	I0701 12:38:39.119319    1896 system_pods.go:61] "coredns-5d78c9869d-j2pqz" [c4d3afd8-8856-4199-977e-a5b16323a9da] Running
	I0701 12:38:39.119325    1896 system_pods.go:61] "etcd-functional-011000" [f709fedd-3081-42cc-9e23-2a223d3de125] Running
	I0701 12:38:39.119331    1896 system_pods.go:61] "kube-apiserver-functional-011000" [ec9aec5e-81d5-414b-af8f-9812ae77621f] Running
	I0701 12:38:39.119338    1896 system_pods.go:61] "kube-controller-manager-functional-011000" [fd63dbc4-d5b1-4566-a555-86f7b214470e] Running
	I0701 12:38:39.119342    1896 system_pods.go:61] "kube-proxy-qfh78" [381b799e-1928-4ae7-944a-8d3711435db5] Running
	I0701 12:38:39.119348    1896 system_pods.go:61] "kube-scheduler-functional-011000" [24ec50b7-defb-4349-8a85-9b43186d9c90] Running
	I0701 12:38:39.119353    1896 system_pods.go:61] "storage-provisioner" [c6de41e7-048e-46f4-99d8-61d3c3eeaee5] Running
	I0701 12:38:39.119359    1896 system_pods.go:74] duration metric: took 180.460292ms to wait for pod list to return data ...
	I0701 12:38:39.119371    1896 default_sa.go:34] waiting for default service account to be created ...
	I0701 12:38:39.308902    1896 default_sa.go:45] found service account: "default"
	I0701 12:38:39.308925    1896 default_sa.go:55] duration metric: took 189.550584ms for default service account to be created ...
	I0701 12:38:39.308940    1896 system_pods.go:116] waiting for k8s-apps to be running ...
	I0701 12:38:39.515057    1896 system_pods.go:86] 7 kube-system pods found
	I0701 12:38:39.515080    1896 system_pods.go:89] "coredns-5d78c9869d-j2pqz" [c4d3afd8-8856-4199-977e-a5b16323a9da] Running
	I0701 12:38:39.515089    1896 system_pods.go:89] "etcd-functional-011000" [f709fedd-3081-42cc-9e23-2a223d3de125] Running
	I0701 12:38:39.515135    1896 system_pods.go:89] "kube-apiserver-functional-011000" [ec9aec5e-81d5-414b-af8f-9812ae77621f] Running
	I0701 12:38:39.515143    1896 system_pods.go:89] "kube-controller-manager-functional-011000" [fd63dbc4-d5b1-4566-a555-86f7b214470e] Running
	I0701 12:38:39.515151    1896 system_pods.go:89] "kube-proxy-qfh78" [381b799e-1928-4ae7-944a-8d3711435db5] Running
	I0701 12:38:39.515159    1896 system_pods.go:89] "kube-scheduler-functional-011000" [24ec50b7-defb-4349-8a85-9b43186d9c90] Running
	I0701 12:38:39.515165    1896 system_pods.go:89] "storage-provisioner" [c6de41e7-048e-46f4-99d8-61d3c3eeaee5] Running
	I0701 12:38:39.515180    1896 system_pods.go:126] duration metric: took 206.236083ms to wait for k8s-apps to be running ...
	I0701 12:38:39.515193    1896 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 12:38:39.515428    1896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:38:39.532254    1896 system_svc.go:56] duration metric: took 17.063125ms WaitForService to wait for kubelet.
	I0701 12:38:39.532265    1896 kubeadm.go:581] duration metric: took 2.98420825s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0701 12:38:39.532285    1896 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:38:39.705920    1896 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0701 12:38:39.705932    1896 node_conditions.go:123] node cpu capacity is 2
	I0701 12:38:39.705941    1896 node_conditions.go:105] duration metric: took 173.655958ms to run NodePressure ...
	I0701 12:38:39.705950    1896 start.go:228] waiting for startup goroutines ...
	I0701 12:38:39.705957    1896 start.go:233] waiting for cluster config update ...
	I0701 12:38:39.705966    1896 start.go:242] writing updated cluster config ...
	I0701 12:38:39.706556    1896 ssh_runner.go:195] Run: rm -f paused
	I0701 12:38:39.750667    1896 start.go:642] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0701 12:38:39.753644    1896 out.go:177] * Done! kubectl is now configured to use "functional-011000" cluster and "default" namespace by default
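	
	The readiness checks above can be repeated by hand; a minimal sketch, assuming the apiserver endpoint from this log (192.168.105.4:8441) and the kubectl context minikube just wrote for this profile:
	
	  # probe the same healthz endpoint minikube polled (self-signed cert, hence -k)
	  curl -ks https://192.168.105.4:8441/healthz
	  # list the seven kube-system pods enumerated above
	  kubectl --context functional-011000 -n kube-system get pods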
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-07-01 19:36:45 UTC, ends at Sat 2023-07-01 19:39:29 UTC. --
	Jul 01 19:39:19 functional-011000 dockerd[6744]: time="2023-07-01T19:39:19.455493187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:39:19 functional-011000 dockerd[6744]: time="2023-07-01T19:39:19.455617019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 19:39:19 functional-011000 dockerd[6744]: time="2023-07-01T19:39:19.455626519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:39:19 functional-011000 cri-dockerd[7014]: time="2023-07-01T19:39:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ecd128eee1070d3d77c8ef7e24e499aa699fd0693140fc30ab4e76a05fc73f2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 01 19:39:21 functional-011000 cri-dockerd[7014]: time="2023-07-01T19:39:21Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Jul 01 19:39:21 functional-011000 dockerd[6744]: time="2023-07-01T19:39:21.426156438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 19:39:21 functional-011000 dockerd[6744]: time="2023-07-01T19:39:21.426264104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:39:21 functional-011000 dockerd[6744]: time="2023-07-01T19:39:21.426284312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 19:39:21 functional-011000 dockerd[6744]: time="2023-07-01T19:39:21.426289020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:39:21 functional-011000 dockerd[6738]: time="2023-07-01T19:39:21.481001009Z" level=info msg="ignoring event" container=d4f67b4799331c12042118d00f770fd92cd9f9959520a148e250bdae84d8f8aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 19:39:21 functional-011000 dockerd[6744]: time="2023-07-01T19:39:21.481093466Z" level=info msg="shim disconnected" id=d4f67b4799331c12042118d00f770fd92cd9f9959520a148e250bdae84d8f8aa namespace=moby
	Jul 01 19:39:21 functional-011000 dockerd[6744]: time="2023-07-01T19:39:21.481116716Z" level=warning msg="cleaning up after shim disconnected" id=d4f67b4799331c12042118d00f770fd92cd9f9959520a148e250bdae84d8f8aa namespace=moby
	Jul 01 19:39:21 functional-011000 dockerd[6744]: time="2023-07-01T19:39:21.481120258Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 19:39:22 functional-011000 dockerd[6738]: time="2023-07-01T19:39:22.833290303Z" level=info msg="ignoring event" container=5ecd128eee1070d3d77c8ef7e24e499aa699fd0693140fc30ab4e76a05fc73f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 19:39:22 functional-011000 dockerd[6744]: time="2023-07-01T19:39:22.834523622Z" level=info msg="shim disconnected" id=5ecd128eee1070d3d77c8ef7e24e499aa699fd0693140fc30ab4e76a05fc73f2 namespace=moby
	Jul 01 19:39:22 functional-011000 dockerd[6744]: time="2023-07-01T19:39:22.834578872Z" level=warning msg="cleaning up after shim disconnected" id=5ecd128eee1070d3d77c8ef7e24e499aa699fd0693140fc30ab4e76a05fc73f2 namespace=moby
	Jul 01 19:39:22 functional-011000 dockerd[6744]: time="2023-07-01T19:39:22.834584580Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 19:39:24 functional-011000 dockerd[6744]: time="2023-07-01T19:39:24.654351419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 19:39:24 functional-011000 dockerd[6744]: time="2023-07-01T19:39:24.654403252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:39:24 functional-011000 dockerd[6744]: time="2023-07-01T19:39:24.654954371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 19:39:24 functional-011000 dockerd[6744]: time="2023-07-01T19:39:24.654972037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:39:24 functional-011000 dockerd[6738]: time="2023-07-01T19:39:24.701123163Z" level=info msg="ignoring event" container=b03df2b02807c41e4c85bd662edbd0044d595995e407e239c5773540c5c799c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 19:39:24 functional-011000 dockerd[6744]: time="2023-07-01T19:39:24.701712157Z" level=info msg="shim disconnected" id=b03df2b02807c41e4c85bd662edbd0044d595995e407e239c5773540c5c799c7 namespace=moby
	Jul 01 19:39:24 functional-011000 dockerd[6744]: time="2023-07-01T19:39:24.701746865Z" level=warning msg="cleaning up after shim disconnected" id=b03df2b02807c41e4c85bd662edbd0044d595995e407e239c5773540c5c799c7 namespace=moby
	Jul 01 19:39:24 functional-011000 dockerd[6744]: time="2023-07-01T19:39:24.701751240Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	b03df2b02807c       72565bf5bbedf                                                                                         5 seconds ago        Exited              echoserver-arm            2                   85706be8275b3
	d4f67b4799331       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   8 seconds ago        Exited              mount-munger              0                   5ecd128eee107
	6b0c53abf7347       72565bf5bbedf                                                                                         11 seconds ago       Exited              echoserver-arm            2                   952441f11534a
	79bd461b68fb9       nginx@sha256:593dac25b7733ffb7afe1a72649a43e574778bf025ad60514ef40f6b5d606247                         25 seconds ago       Running             myfrontend                0                   22f9d06edfd08
	973ba2c09e898       nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                         40 seconds ago       Running             nginx                     0                   9e61590d8336b
	51bfd83d003f7       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   fc098e938f483
	b7679d5b7924e       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   7bd7725687c3f
	220d9e6f59abb       fb73e92641fd5                                                                                         About a minute ago   Running             kube-proxy                2                   49beefc54daf5
	c16c436f8f9bb       39dfb036b0986                                                                                         About a minute ago   Running             kube-apiserver            0                   d11a9d5a5b5af
	6cf704c51aaeb       bcb9e554eaab6                                                                                         About a minute ago   Running             kube-scheduler            2                   cdb83b1b700cb
	f04746a7d03ad       ab3683b584ae5                                                                                         About a minute ago   Running             kube-controller-manager   2                   628748b24a224
	34acd157a78b6       24bc64e911039                                                                                         About a minute ago   Running             etcd                      2                   ab65510268581
	2d2a9ba66fd31       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   0f4fdb03c56e2
	dd0af922a985a       fb73e92641fd5                                                                                         About a minute ago   Exited              kube-proxy                1                   ee7a8f9dfe82a
	955e72383081a       97e04611ad434                                                                                         About a minute ago   Exited              coredns                   1                   c5662cbb8c45e
	a20c9537a5fd9       24bc64e911039                                                                                         About a minute ago   Exited              etcd                      1                   2b24d652ffd21
	c534359a1c219       ab3683b584ae5                                                                                         About a minute ago   Exited              kube-controller-manager   1                   37a1c00baaf85
	fa15e19e6e80d       bcb9e554eaab6                                                                                         About a minute ago   Exited              kube-scheduler            1                   e1b80b47f3a62
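	
	The Exited attempt-1 control-plane containers alongside their Running attempt-2 replacements reflect the functional test's restart of the cluster, and the two Exited echoserver-arm containers match the CrashLoopBackOff seen in the kubelet journal below. Since this run uses the docker runtime, a comparable view can be regenerated with something like:
	
	  minikube -p functional-011000 ssh -- docker ps -a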
	
	* 
	* ==> coredns [51bfd83d003f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43090 - 9108 "HINFO IN 3016052196517135100.8555654028234789168. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004168574s
	[INFO] 10.244.0.1:52174 - 31346 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000146039s
	[INFO] 10.244.0.1:5213 - 44746 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.00009704s
	[INFO] 10.244.0.1:51599 - 48580 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.00004879s
	[INFO] 10.244.0.1:40466 - 22876 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001019519s
	[INFO] 10.244.0.1:64669 - 27629 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.00005904s
	[INFO] 10.244.0.1:36785 - 40075 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000237036s
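	
	The NOERROR answers above show the cluster DNS resolving nginx-svc from inside the pod network. A hypothetical way to replay one of these lookups, assuming the cluster DNS address 10.96.0.10 seen in the cri-dockerd resolv.conf rewrite earlier:
	
	  kubectl --context functional-011000 run dnsprobe --rm -it --restart=Never \
	    --image=busybox:1.28 -- nslookup nginx-svc.default.svc.cluster.local 10.96.0.10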
	
	* 
	* ==> coredns [955e72383081] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39219 - 35371 "HINFO IN 3702910170871863307.6288367798891933497. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004200959s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-011000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-011000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2455319192314a5b3ac0f7b56253e90d3c5c74c2
	                    minikube.k8s.io/name=functional-011000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_01T12_37_02_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Jul 2023 19:36:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-011000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Jul 2023 19:39:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Jul 2023 19:39:23 +0000   Sat, 01 Jul 2023 19:36:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Jul 2023 19:39:23 +0000   Sat, 01 Jul 2023 19:36:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Jul 2023 19:39:23 +0000   Sat, 01 Jul 2023 19:36:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Jul 2023 19:39:23 +0000   Sat, 01 Jul 2023 19:37:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-011000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 8760f467df7d4792929ff6e64258e710
	  System UUID:                8760f467df7d4792929ff6e64258e710
	  Boot ID:                    0ea3de3b-1887-4591-9973-4b408990c54d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-7b684b55f9-lqbz6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  default                     hello-node-connect-58d66798bb-wvv4n          0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 coredns-5d78c9869d-j2pqz                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m14s
	  kube-system                 etcd-functional-011000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m27s
	  kube-system                 kube-apiserver-functional-011000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-controller-manager-functional-011000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-qfh78                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-functional-011000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m12s              kube-proxy       
	  Normal  Starting                 65s                kube-proxy       
	  Normal  Starting                 111s               kube-proxy       
	  Normal  Starting                 2m27s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m27s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m27s              kubelet          Node functional-011000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s              kubelet          Node functional-011000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s              kubelet          Node functional-011000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m24s              kubelet          Node functional-011000 status is now: NodeReady
	  Normal  RegisteredNode           2m15s              node-controller  Node functional-011000 event: Registered Node functional-011000 in Controller
	  Normal  RegisteredNode           99s                node-controller  Node functional-011000 event: Registered Node functional-011000 in Controller
	  Normal  Starting                 70s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node functional-011000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node functional-011000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s (x7 over 70s)  kubelet          Node functional-011000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           55s                node-controller  Node functional-011000 event: Registered Node functional-011000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.093908] systemd-fstab-generator[3814]: Ignoring "noauto" for root device
	[  +0.101421] systemd-fstab-generator[3827]: Ignoring "noauto" for root device
	[  +1.515557] kauditd_printk_skb: 30 callbacks suppressed
	[  +9.860350] systemd-fstab-generator[4453]: Ignoring "noauto" for root device
	[  +0.075699] systemd-fstab-generator[4464]: Ignoring "noauto" for root device
	[  +0.067348] systemd-fstab-generator[4475]: Ignoring "noauto" for root device
	[  +0.068212] systemd-fstab-generator[4486]: Ignoring "noauto" for root device
	[  +0.097123] systemd-fstab-generator[4560]: Ignoring "noauto" for root device
	[  +4.674297] kauditd_printk_skb: 34 callbacks suppressed
	[Jul 1 19:38] systemd-fstab-generator[6284]: Ignoring "noauto" for root device
	[  +0.140491] systemd-fstab-generator[6316]: Ignoring "noauto" for root device
	[  +0.083210] systemd-fstab-generator[6327]: Ignoring "noauto" for root device
	[  +0.091319] systemd-fstab-generator[6340]: Ignoring "noauto" for root device
	[ +11.451896] systemd-fstab-generator[6903]: Ignoring "noauto" for root device
	[  +0.066973] systemd-fstab-generator[6914]: Ignoring "noauto" for root device
	[  +0.067551] systemd-fstab-generator[6925]: Ignoring "noauto" for root device
	[  +0.066378] systemd-fstab-generator[6936]: Ignoring "noauto" for root device
	[  +0.094580] systemd-fstab-generator[7007]: Ignoring "noauto" for root device
	[  +1.160418] systemd-fstab-generator[7251]: Ignoring "noauto" for root device
	[  +3.555012] kauditd_printk_skb: 29 callbacks suppressed
	[ +23.274182] kauditd_printk_skb: 11 callbacks suppressed
	[ +10.980024] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.734756] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Jul 1 19:39] kauditd_printk_skb: 1 callbacks suppressed
	[  +8.843624] kauditd_printk_skb: 1 callbacks suppressed
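	
	Most of this dmesg output is routine systemd-fstab-generator and audit noise from the repeated docker/kubelet restarts. The one line worth flagging is the GRO warning on eth0, which can depress TCP throughput inside the VM; if it mattered for a timing-sensitive test, a hypothetical workaround (assuming ethtool is present in the guest image) would be:
	
	  minikube -p functional-011000 ssh -- sudo ethtool -K eth0 gro off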
	
	* 
	* ==> etcd [34acd157a78b] <==
	* {"level":"info","ts":"2023-07-01T19:38:20.705Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-01T19:38:20.707Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"7520ddf439b1d16","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-07-01T19:38:20.707Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-01T19:38:20.707Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-01T19:38:20.707Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-01T19:38:20.713Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-01T19:38:20.713Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-01T19:38:20.713Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-01T19:38:20.714Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-01T19:38:20.714Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-01T19:38:21.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-07-01T19:38:21.642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-07-01T19:38:21.642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-07-01T19:38:21.642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-07-01T19:38:21.642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-07-01T19:38:21.642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-07-01T19:38:21.642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-07-01T19:38:21.643Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-011000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-01T19:38:21.643Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-01T19:38:21.644Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-01T19:38:21.644Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-01T19:38:21.644Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-07-01T19:38:21.652Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-01T19:38:21.652Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-01T19:38:55.579Z","caller":"traceutil/trace.go:171","msg":"trace[518753910] transaction","detail":"{read_only:false; response_revision:667; number_of_response:1; }","duration":"136.117806ms","start":"2023-07-01T19:38:55.443Z","end":"2023-07-01T19:38:55.579Z","steps":["trace[518753910] 'process raft request'  (duration: 136.023558ms)"],"step_count":1}
	
	* 
	* ==> etcd [a20c9537a5fd] <==
	* {"level":"info","ts":"2023-07-01T19:37:35.980Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-01T19:37:35.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-07-01T19:37:35.980Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-07-01T19:37:35.980Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-01T19:37:35.980Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-01T19:37:37.070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-01T19:37:37.070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-01T19:37:37.070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-07-01T19:37:37.070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-07-01T19:37:37.070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-07-01T19:37:37.070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-07-01T19:37:37.070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-07-01T19:37:37.071Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-011000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-01T19:37:37.071Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-01T19:37:37.072Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-07-01T19:37:37.072Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-01T19:37:37.073Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-01T19:37:37.073Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-01T19:37:37.073Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-01T19:38:06.644Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-07-01T19:38:06.644Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"functional-011000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"info","ts":"2023-07-01T19:38:06.657Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-07-01T19:38:06.658Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-01T19:38:06.659Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-01T19:38:06.659Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"functional-011000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> kernel <==
	*  19:39:29 up 2 min,  0 users,  load average: 0.41, 0.25, 0.10
	Linux functional-011000 5.10.57 #1 SMP PREEMPT Thu Jun 22 18:49:06 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c16c436f8f9b] <==
	* I0701 19:38:22.340411       1 shared_informer.go:318] Caches are synced for configmaps
	I0701 19:38:22.340489       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0701 19:38:22.340498       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0701 19:38:22.340560       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 19:38:22.340744       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0701 19:38:22.340946       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 19:38:22.341080       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0701 19:38:22.341166       1 aggregator.go:152] initial CRD sync complete...
	I0701 19:38:22.341177       1 autoregister_controller.go:141] Starting autoregister controller
	I0701 19:38:22.341179       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0701 19:38:22.341182       1 cache.go:39] Caches are synced for autoregister controller
	I0701 19:38:23.114000       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0701 19:38:23.249155       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0701 19:38:23.910795       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0701 19:38:23.917524       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0701 19:38:23.936253       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0701 19:38:23.953299       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0701 19:38:23.960341       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0701 19:38:34.609182       1 controller.go:624] quota admission added evaluator for: endpoints
	I0701 19:38:34.755269       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0701 19:38:41.272937       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs=map[IPv4:10.98.242.247]
	I0701 19:38:45.855848       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.96.137.4]
	I0701 19:38:57.243587       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0701 19:38:57.286973       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.110.151.105]
	I0701 19:39:10.719057       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.107.235.41]
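	
	The "allocated clusterIPs" lines confirm the functional tests created their services (invalid-svc, nginx-svc, hello-node-connect, hello-node). To cross-check the allocations against the test expectations:
	
	  kubectl --context functional-011000 get svc -o wide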
	
	* 
	* ==> kube-controller-manager [c534359a1c21] <==
	* I0701 19:37:50.497815       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0701 19:37:50.497818       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0701 19:37:50.499821       1 shared_informer.go:318] Caches are synced for deployment
	I0701 19:37:50.502169       1 shared_informer.go:318] Caches are synced for taint
	I0701 19:37:50.502197       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0701 19:37:50.502224       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-011000"
	I0701 19:37:50.502241       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0701 19:37:50.502288       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0701 19:37:50.502322       1 taint_manager.go:211] "Sending events to api server"
	I0701 19:37:50.502367       1 event.go:307] "Event occurred" object="functional-011000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-011000 event: Registered Node functional-011000 in Controller"
	I0701 19:37:50.506126       1 shared_informer.go:318] Caches are synced for PV protection
	I0701 19:37:50.513063       1 shared_informer.go:318] Caches are synced for ephemeral
	I0701 19:37:50.513083       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0701 19:37:50.513109       1 shared_informer.go:318] Caches are synced for job
	I0701 19:37:50.513129       1 shared_informer.go:318] Caches are synced for disruption
	I0701 19:37:50.513159       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0701 19:37:50.521041       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0701 19:37:50.596828       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0701 19:37:50.613977       1 shared_informer.go:318] Caches are synced for cronjob
	I0701 19:37:50.615323       1 shared_informer.go:318] Caches are synced for resource quota
	I0701 19:37:50.618146       1 shared_informer.go:318] Caches are synced for resource quota
	I0701 19:37:50.630501       1 shared_informer.go:318] Caches are synced for persistent volume
	I0701 19:37:51.035346       1 shared_informer.go:318] Caches are synced for garbage collector
	I0701 19:37:51.035374       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0701 19:37:51.035497       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [f04746a7d03a] <==
	* I0701 19:38:34.535834       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0701 19:38:34.535905       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-011000"
	I0701 19:38:34.535933       1 event.go:307] "Event occurred" object="functional-011000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-011000 event: Registered Node functional-011000 in Controller"
	I0701 19:38:34.535939       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0701 19:38:34.535840       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0701 19:38:34.536025       1 taint_manager.go:211] "Sending events to api server"
	I0701 19:38:34.546684       1 shared_informer.go:318] Caches are synced for crt configmap
	I0701 19:38:34.548843       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0701 19:38:34.551011       1 shared_informer.go:318] Caches are synced for daemon sets
	I0701 19:38:34.551895       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0701 19:38:34.552929       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0701 19:38:34.552990       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0701 19:38:34.554091       1 shared_informer.go:318] Caches are synced for PVC protection
	I0701 19:38:34.555205       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0701 19:38:34.557226       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0701 19:38:34.692193       1 shared_informer.go:318] Caches are synced for resource quota
	I0701 19:38:34.756151       1 shared_informer.go:318] Caches are synced for resource quota
	I0701 19:38:35.068647       1 shared_informer.go:318] Caches are synced for garbage collector
	I0701 19:38:35.068680       1 shared_informer.go:318] Caches are synced for garbage collector
	I0701 19:38:35.068693       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0701 19:38:50.696769       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0701 19:38:57.245589       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-58d66798bb to 1"
	I0701 19:38:57.253266       1 event.go:307] "Event occurred" object="default/hello-node-connect-58d66798bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-58d66798bb-wvv4n"
	I0701 19:39:10.677754       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-7b684b55f9 to 1"
	I0701 19:39:10.680333       1 event.go:307] "Event occurred" object="default/hello-node-7b684b55f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-7b684b55f9-lqbz6"
	
	* 
	* ==> kube-proxy [220d9e6f59ab] <==
	* I0701 19:38:24.006468       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0701 19:38:24.006582       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0701 19:38:24.006610       1 server_others.go:554] "Using iptables proxy"
	I0701 19:38:24.017006       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0701 19:38:24.017016       1 server_others.go:192] "Using iptables Proxier"
	I0701 19:38:24.017050       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 19:38:24.017236       1 server.go:658] "Version info" version="v1.27.3"
	I0701 19:38:24.017241       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 19:38:24.017519       1 config.go:188] "Starting service config controller"
	I0701 19:38:24.017525       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0701 19:38:24.017533       1 config.go:97] "Starting endpoint slice config controller"
	I0701 19:38:24.017534       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0701 19:38:24.017888       1 config.go:315] "Starting node config controller"
	I0701 19:38:24.017892       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0701 19:38:24.117857       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0701 19:38:24.117878       1 shared_informer.go:318] Caches are synced for service config
	I0701 19:38:24.117980       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [dd0af922a985] <==
	* I0701 19:37:37.761670       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0701 19:37:37.765502       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0701 19:37:37.765576       1 server_others.go:554] "Using iptables proxy"
	I0701 19:37:37.776528       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0701 19:37:37.776540       1 server_others.go:192] "Using iptables Proxier"
	I0701 19:37:37.776556       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 19:37:37.776748       1 server.go:658] "Version info" version="v1.27.3"
	I0701 19:37:37.776756       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 19:37:37.777204       1 config.go:188] "Starting service config controller"
	I0701 19:37:37.777209       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0701 19:37:37.777216       1 config.go:97] "Starting endpoint slice config controller"
	I0701 19:37:37.777217       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0701 19:37:37.777336       1 config.go:315] "Starting node config controller"
	I0701 19:37:37.777338       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0701 19:37:37.878219       1 shared_informer.go:318] Caches are synced for node config
	I0701 19:37:37.878240       1 shared_informer.go:318] Caches are synced for service config
	I0701 19:37:37.878253       1 shared_informer.go:318] Caches are synced for endpoint slice config
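	
	Both kube-proxy generations come up in iptables mode and sync their informer caches without errors. A hypothetical spot check of the resulting NAT rules for the test services, assuming the standard KUBE-SERVICES chain name used by the iptables proxier:
	
	  minikube -p functional-011000 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | grep -E 'nginx-svc|hello-node'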
	
	* 
	* ==> kube-scheduler [6cf704c51aae] <==
	* I0701 19:38:21.283193       1 serving.go:348] Generated self-signed cert in-memory
	W0701 19:38:22.271121       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0701 19:38:22.271230       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0701 19:38:22.271255       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0701 19:38:22.271273       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0701 19:38:22.307074       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0701 19:38:22.307131       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 19:38:22.308736       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0701 19:38:22.309156       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0701 19:38:22.309187       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0701 19:38:22.309962       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0701 19:38:22.410328       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [fa15e19e6e80] <==
	* I0701 19:37:36.181293       1 serving.go:348] Generated self-signed cert in-memory
	I0701 19:37:37.753933       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0701 19:37:37.753948       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 19:37:37.756255       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0701 19:37:37.756267       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0701 19:37:37.756279       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0701 19:37:37.756282       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0701 19:37:37.756287       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0701 19:37:37.756292       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0701 19:37:37.757003       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0701 19:37:37.758482       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0701 19:37:37.857310       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0701 19:37:37.857340       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0701 19:37:37.857414       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0701 19:38:06.651081       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0701 19:38:06.651101       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0701 19:38:06.651168       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0701 19:38:06.651186       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-07-01 19:36:45 UTC, ends at Sat 2023-07-01 19:39:29 UTC. --
	Jul 01 19:39:19 functional-011000 kubelet[7257]: I0701 19:39:19.193799    7257 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxrwh\" (UniqueName: \"kubernetes.io/projected/09a9d85b-3d5c-41b1-ab27-70339046efaa-kube-api-access-jxrwh\") pod \"busybox-mount\" (UID: \"09a9d85b-3d5c-41b1-ab27-70339046efaa\") " pod="default/busybox-mount"
	Jul 01 19:39:19 functional-011000 kubelet[7257]: E0701 19:39:19.610750    7257 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 01 19:39:19 functional-011000 kubelet[7257]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 01 19:39:19 functional-011000 kubelet[7257]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 01 19:39:19 functional-011000 kubelet[7257]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 01 19:39:19 functional-011000 kubelet[7257]: I0701 19:39:19.676884    7257 scope.go:115] "RemoveContainer" containerID="ee3de8593401b7f49ab49e4b3d6d97e28035231a21086fd8973e0041f6f9af1e"
	Jul 01 19:39:19 functional-011000 kubelet[7257]: I0701 19:39:19.682986    7257 scope.go:115] "RemoveContainer" containerID="323b7eb6577a5d187a0c30d88daa1b3c134a87333ee6e9ac6940c7d927b20420"
	Jul 01 19:39:19 functional-011000 kubelet[7257]: I0701 19:39:19.693189    7257 scope.go:115] "RemoveContainer" containerID="6b0c53abf73470bd2e5ee4e4b15cced28bc016c86bb82aa7ce406412ef22a65e"
	Jul 01 19:39:19 functional-011000 kubelet[7257]: E0701 19:39:19.693279    7257 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-wvv4n_default(82936aa7-b5e7-405b-a117-648d39ab168b)\"" pod="default/hello-node-connect-58d66798bb-wvv4n" podUID=82936aa7-b5e7-405b-a117-648d39ab168b
	Jul 01 19:39:20 functional-011000 kubelet[7257]: I0701 19:39:20.724772    7257 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="323b7eb6577a5d187a0c30d88daa1b3c134a87333ee6e9ac6940c7d927b20420"
	Jul 01 19:39:20 functional-011000 kubelet[7257]: I0701 19:39:20.725094    7257 scope.go:115] "RemoveContainer" containerID="6b0c53abf73470bd2e5ee4e4b15cced28bc016c86bb82aa7ce406412ef22a65e"
	Jul 01 19:39:20 functional-011000 kubelet[7257]: E0701 19:39:20.725282    7257 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-wvv4n_default(82936aa7-b5e7-405b-a117-648d39ab168b)\"" pod="default/hello-node-connect-58d66798bb-wvv4n" podUID=82936aa7-b5e7-405b-a117-648d39ab168b
	Jul 01 19:39:21 functional-011000 kubelet[7257]: I0701 19:39:21.741625    7257 scope.go:115] "RemoveContainer" containerID="6b0c53abf73470bd2e5ee4e4b15cced28bc016c86bb82aa7ce406412ef22a65e"
	Jul 01 19:39:21 functional-011000 kubelet[7257]: E0701 19:39:21.742257    7257 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-wvv4n_default(82936aa7-b5e7-405b-a117-648d39ab168b)\"" pod="default/hello-node-connect-58d66798bb-wvv4n" podUID=82936aa7-b5e7-405b-a117-648d39ab168b
	Jul 01 19:39:22 functional-011000 kubelet[7257]: I0701 19:39:22.931680    7257 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/09a9d85b-3d5c-41b1-ab27-70339046efaa-test-volume\") pod \"09a9d85b-3d5c-41b1-ab27-70339046efaa\" (UID: \"09a9d85b-3d5c-41b1-ab27-70339046efaa\") "
	Jul 01 19:39:22 functional-011000 kubelet[7257]: I0701 19:39:22.931748    7257 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09a9d85b-3d5c-41b1-ab27-70339046efaa-test-volume" (OuterVolumeSpecName: "test-volume") pod "09a9d85b-3d5c-41b1-ab27-70339046efaa" (UID: "09a9d85b-3d5c-41b1-ab27-70339046efaa"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 01 19:39:22 functional-011000 kubelet[7257]: I0701 19:39:22.931976    7257 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxrwh\" (UniqueName: \"kubernetes.io/projected/09a9d85b-3d5c-41b1-ab27-70339046efaa-kube-api-access-jxrwh\") pod \"09a9d85b-3d5c-41b1-ab27-70339046efaa\" (UID: \"09a9d85b-3d5c-41b1-ab27-70339046efaa\") "
	Jul 01 19:39:22 functional-011000 kubelet[7257]: I0701 19:39:22.932002    7257 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/09a9d85b-3d5c-41b1-ab27-70339046efaa-test-volume\") on node \"functional-011000\" DevicePath \"\""
	Jul 01 19:39:22 functional-011000 kubelet[7257]: I0701 19:39:22.934583    7257 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09a9d85b-3d5c-41b1-ab27-70339046efaa-kube-api-access-jxrwh" (OuterVolumeSpecName: "kube-api-access-jxrwh") pod "09a9d85b-3d5c-41b1-ab27-70339046efaa" (UID: "09a9d85b-3d5c-41b1-ab27-70339046efaa"). InnerVolumeSpecName "kube-api-access-jxrwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 01 19:39:23 functional-011000 kubelet[7257]: I0701 19:39:23.033074    7257 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jxrwh\" (UniqueName: \"kubernetes.io/projected/09a9d85b-3d5c-41b1-ab27-70339046efaa-kube-api-access-jxrwh\") on node \"functional-011000\" DevicePath \"\""
	Jul 01 19:39:23 functional-011000 kubelet[7257]: I0701 19:39:23.767205    7257 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ecd128eee1070d3d77c8ef7e24e499aa699fd0693140fc30ab4e76a05fc73f2"
	Jul 01 19:39:24 functional-011000 kubelet[7257]: I0701 19:39:24.585827    7257 scope.go:115] "RemoveContainer" containerID="62abd42b16c858a5c9bc222112443dfa1abc5ef9cc55c6f83fce45d3158aea88"
	Jul 01 19:39:24 functional-011000 kubelet[7257]: I0701 19:39:24.774799    7257 scope.go:115] "RemoveContainer" containerID="62abd42b16c858a5c9bc222112443dfa1abc5ef9cc55c6f83fce45d3158aea88"
	Jul 01 19:39:24 functional-011000 kubelet[7257]: I0701 19:39:24.774952    7257 scope.go:115] "RemoveContainer" containerID="b03df2b02807c41e4c85bd662edbd0044d595995e407e239c5773540c5c799c7"
	Jul 01 19:39:24 functional-011000 kubelet[7257]: E0701 19:39:24.775039    7257 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-lqbz6_default(0388c9f1-c13a-4397-a37e-1e0d13744034)\"" pod="default/hello-node-7b684b55f9-lqbz6" podUID=0388c9f1-c13a-4397-a37e-1e0d13744034
	
	* 
	* ==> storage-provisioner [2d2a9ba66fd3] <==
	* I0701 19:37:36.049004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0701 19:37:37.765028       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0701 19:37:37.765122       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0701 19:37:55.187039       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0701 19:37:55.187466       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-011000_0e3efc00-989a-412e-9d64-31b7e0b7d9e5!
	I0701 19:37:55.195869       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b3d75f5f-26d6-47eb-92a4-a14f9be88f53", APIVersion:"v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-011000_0e3efc00-989a-412e-9d64-31b7e0b7d9e5 became leader
	I0701 19:37:55.290327       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-011000_0e3efc00-989a-412e-9d64-31b7e0b7d9e5!
	
	* 
	* ==> storage-provisioner [b7679d5b7924] <==
	* I0701 19:38:23.988688       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0701 19:38:23.993540       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0701 19:38:23.993607       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0701 19:38:41.398434       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0701 19:38:41.398585       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-011000_78f6c7b7-8e94-46dc-9b11-e7136266787f!
	I0701 19:38:41.398832       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b3d75f5f-26d6-47eb-92a4-a14f9be88f53", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-011000_78f6c7b7-8e94-46dc-9b11-e7136266787f became leader
	I0701 19:38:41.498822       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-011000_78f6c7b7-8e94-46dc-9b11-e7136266787f!
	I0701 19:38:50.697003       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0701 19:38:50.697315       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"ab3db50d-ff82-4675-a746-fa48bbe8e878", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0701 19:38:50.697112       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    3bcbbdbe-1f49-469b-963b-505c9a158c63 351 0 2023-07-01 19:37:16 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-07-01 19:37:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-ab3db50d-ff82-4675-a746-fa48bbe8e878 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  ab3db50d-ff82-4675-a746-fa48bbe8e878 647 0 2023-07-01 19:38:50 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-07-01 19:38:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-07-01 19:38:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0701 19:38:50.697788       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-ab3db50d-ff82-4675-a746-fa48bbe8e878" provisioned
	I0701 19:38:50.697832       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0701 19:38:50.697853       1 volume_store.go:212] Trying to save persistentvolume "pvc-ab3db50d-ff82-4675-a746-fa48bbe8e878"
	I0701 19:38:50.702858       1 volume_store.go:219] persistentvolume "pvc-ab3db50d-ff82-4675-a746-fa48bbe8e878" saved
	I0701 19:38:50.703011       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"ab3db50d-ff82-4675-a746-fa48bbe8e878", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-ab3db50d-ff82-4675-a746-fa48bbe8e878
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-011000 -n functional-011000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-011000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-011000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-011000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-011000/192.168.105.4
	Start Time:       Sat, 01 Jul 2023 12:39:19 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://d4f67b4799331c12042118d00f770fd92cd9f9959520a148e250bdae84d8f8aa
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Jul 2023 12:39:21 -0700
	      Finished:     Sat, 01 Jul 2023 12:39:21 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jxrwh (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-jxrwh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/busybox-mount to functional-011000
	  Normal  Pulling    10s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.796901768s (1.796912144s including waiting)
	  Normal  Created    8s    kubelet            Created container mount-munger
	  Normal  Started    8s    kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (32.59s)
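
A note for readers replaying this post-mortem by hand: the non-running-pod check at helpers_test.go:261 above is a single kubectl invocation, and here it flags only busybox-mount, whose mount-munger container ran to completion while the echoserver-arm containers cycled through CrashLoopBackOff. The following is a minimal standalone Go sketch of that same check; it is not part of the test suite, and the context name is copied from the log above.

	// nonrunning.go: illustrative sketch of the harness's post-mortem check,
	// listing pods in any namespace whose phase is not Running.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl",
			"--context", "functional-011000",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).Output()
		if err != nil {
			log.Fatalf("kubectl failed: %v", err)
		}
		if pods := strings.Fields(string(out)); len(pods) > 0 {
			// The run above printed: non-running pods: busybox-mount
			fmt.Printf("non-running pods: %v\n", pods)
		}
	}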

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-011000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-011000 image ls --format yaml --alsologtostderr:
I0701 12:39:52.053162    2307 out.go:296] Setting OutFile to fd 1 ...
I0701 12:39:52.053295    2307 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 12:39:52.053299    2307 out.go:309] Setting ErrFile to fd 2...
I0701 12:39:52.053301    2307 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 12:39:52.053376    2307 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
I0701 12:39:52.053789    2307 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0701 12:39:52.053845    2307 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
W0701 12:39:52.054080    2307 cache_images.go:695] error getting status for functional-011000: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/functional-011000/monitor: connect: connection refused
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
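
The empty image list here is a symptom rather than the root failure: the stderr above shows the qemu2 driver's monitor socket refusing connections, meaning the functional-011000 VM was already gone when image ls ran. As a rough illustration (not part of the suite), a short Go probe of that unix socket distinguishes a dead VM from a genuinely empty image store; the socket path is copied verbatim from the stderr above and would normally depend on MINIKUBE_HOME.

	// monitorprobe.go: hypothetical helper, not from the suite — dial the
	// qemu2 driver's monitor socket that the error above failed to reach.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		sock := "/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/functional-011000/monitor"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Matches the "connect: connection refused" seen in the stderr above.
			fmt.Printf("monitor unreachable, VM likely not running: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("monitor socket is up; an empty image list would then be a real result")
	}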

TestImageBuild/serial/BuildWithBuildArg (1.01s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-933000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-933000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 0119ec8a1314
	Removing intermediate container 0119ec8a1314
	 ---> c3758fb285bc
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 1a2d702d8f02
	Removing intermediate container 1a2d702d8f02
	 ---> 6db111576416
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in ee176422a802
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
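
The build failure itself is an architecture mismatch: the base image gcr.io/google-containers/alpine-with-bash:1.0 is linux/amd64, the qemu2 guest is linux/arm64/v8, and without binfmt emulation the RUN step's amd64 /bin/sh dies with "exec format error". Below is an illustrative standalone Go sketch of a pre-build architecture check; docker image inspect and its .Architecture field are standard Docker CLI, while the helper itself is assumed rather than taken from the suite.

	// archcheck.go: assumed helper — compare a base image's architecture
	// with the local platform before building, to flag the amd64-on-arm64
	// mismatch that produced the failure above.
	package main

	import (
		"fmt"
		"os/exec"
		"runtime"
		"strings"
	)

	func main() {
		// Base image taken from Step 1/5 of the failing build above.
		img := "gcr.io/google-containers/alpine-with-bash:1.0"
		out, err := exec.Command("docker", "image", "inspect",
			"--format", "{{.Architecture}}", img).Output()
		if err != nil {
			fmt.Printf("inspect failed (image not pulled?): %v\n", err)
			return
		}
		arch := strings.TrimSpace(string(out))
		if arch != runtime.GOARCH {
			fmt.Printf("image arch %q != host arch %q: RUN steps will hit an exec format error without emulation\n",
				arch, runtime.GOARCH)
		}
	}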
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-933000 -n image-933000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-933000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| docker-env     | functional-011000 docker-env                             | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	| docker-env     | functional-011000 docker-env                             | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	| ssh            | functional-011000 ssh sudo cat                           | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|                | /etc/test/nested/copy/1461/hosts                         |                   |         |         |                     |                     |
	| update-context | functional-011000                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-011000                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-011000                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| image          | functional-011000 image load --daemon                    | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-011000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-011000 image ls                               | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	| image          | functional-011000 image save                             | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-011000 |                   |         |         |                     |                     |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-011000 image rm                               | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-011000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-011000 image ls                               | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	| image          | functional-011000 image load                             | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-011000 image ls                               | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	| image          | functional-011000 image save --daemon                    | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-011000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-011000                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|                | image ls --format short                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-011000                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|                | image ls --format yaml                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-011000 ssh pgrep                              | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT |                     |
	|                | buildkitd                                                |                   |         |         |                     |                     |
	| image          | functional-011000                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|                | image ls --format json                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-011000 image build -t                         | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|                | localhost/my-image:functional-011000                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                   |         |         |                     |                     |
	| image          | functional-011000                                        | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|                | image ls --format table                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-011000 image ls                               | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	| delete         | -p functional-011000                                     | functional-011000 | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	| start          | -p image-933000 --driver=qemu2                           | image-933000      | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:40 PDT |
	|                |                                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-933000      | jenkins | v1.30.1 | 01 Jul 23 12:40 PDT | 01 Jul 23 12:40 PDT |
	|                | ./testdata/image-build/test-normal                       |                   |         |         |                     |                     |
	|                | -p image-933000                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-933000      | jenkins | v1.30.1 | 01 Jul 23 12:40 PDT | 01 Jul 23 12:40 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                 |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                       |                   |         |         |                     |                     |
	|                | image-933000                                             |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/01 12:39:54
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:39:54.652025    2334 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:39:54.652153    2334 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:39:54.652155    2334 out.go:309] Setting ErrFile to fd 2...
	I0701 12:39:54.652157    2334 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:39:54.652215    2334 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:39:54.653283    2334 out.go:303] Setting JSON to false
	I0701 12:39:54.669377    2334 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":564,"bootTime":1688239830,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:39:54.669447    2334 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:39:54.673586    2334 out.go:177] * [image-933000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:39:54.680475    2334 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:39:54.680478    2334 notify.go:220] Checking for updates...
	I0701 12:39:54.684598    2334 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:39:54.687635    2334 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:39:54.688577    2334 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:39:54.691628    2334 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:39:54.694617    2334 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:39:54.697800    2334 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:39:54.701578    2334 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:39:54.708598    2334 start.go:297] selected driver: qemu2
	I0701 12:39:54.708604    2334 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:39:54.708609    2334 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:39:54.708673    2334 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:39:54.711596    2334 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:39:54.716749    2334 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0701 12:39:54.716836    2334 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 12:39:54.716851    2334 cni.go:84] Creating CNI manager for ""
	I0701 12:39:54.716857    2334 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:39:54.716860    2334 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:39:54.716865    2334 start_flags.go:319] config:
	{Name:image-933000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-933000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:39:54.720933    2334 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:39:54.727598    2334 out.go:177] * Starting control plane node image-933000 in cluster image-933000
	I0701 12:39:54.731632    2334 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:39:54.731659    2334 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:39:54.731667    2334 cache.go:57] Caching tarball of preloaded images
	I0701 12:39:54.731734    2334 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:39:54.731737    2334 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:39:54.731910    2334 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/config.json ...
	I0701 12:39:54.731932    2334 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/config.json: {Name:mkdea5220e49a4b9bcb35b3bbba20e5169b19e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:39:54.732115    2334 start.go:365] acquiring machines lock for image-933000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:39:54.732141    2334 start.go:369] acquired machines lock for "image-933000" in 22.333µs
	I0701 12:39:54.732159    2334 start.go:93] Provisioning new machine with config: &{Name:image-933000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-933000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:39:54.732185    2334 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:39:54.739579    2334 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0701 12:39:54.759354    2334 start.go:159] libmachine.API.Create for "image-933000" (driver="qemu2")
	I0701 12:39:54.759375    2334 client.go:168] LocalClient.Create starting
	I0701 12:39:54.759428    2334 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:39:54.759452    2334 main.go:141] libmachine: Decoding PEM data...
	I0701 12:39:54.759468    2334 main.go:141] libmachine: Parsing certificate...
	I0701 12:39:54.759499    2334 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:39:54.759512    2334 main.go:141] libmachine: Decoding PEM data...
	I0701 12:39:54.759521    2334 main.go:141] libmachine: Parsing certificate...
	I0701 12:39:54.759901    2334 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:39:55.111589    2334 main.go:141] libmachine: Creating SSH key...
	I0701 12:39:55.195418    2334 main.go:141] libmachine: Creating Disk image...
	I0701 12:39:55.195422    2334 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:39:55.195568    2334 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/disk.qcow2
	I0701 12:39:55.211877    2334 main.go:141] libmachine: STDOUT: 
	I0701 12:39:55.211894    2334 main.go:141] libmachine: STDERR: 
	I0701 12:39:55.211946    2334 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/disk.qcow2 +20000M
	I0701 12:39:55.219310    2334 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:39:55.219321    2334 main.go:141] libmachine: STDERR: 
	I0701 12:39:55.219351    2334 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/disk.qcow2
	I0701 12:39:55.219355    2334 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:39:55.219392    2334 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:16:f5:2a:29:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/disk.qcow2
	I0701 12:39:55.262900    2334 main.go:141] libmachine: STDOUT: 
	I0701 12:39:55.262935    2334 main.go:141] libmachine: STDERR: 
	I0701 12:39:55.262938    2334 main.go:141] libmachine: Attempt 0
	I0701 12:39:55.262957    2334 main.go:141] libmachine: Searching for c2:16:f5:2a:29:a6 in /var/db/dhcpd_leases ...
	I0701 12:39:55.263024    2334 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0701 12:39:55.263041    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:4a:25:c7:e6:92:de ID:1,4a:25:c7:e6:92:de Lease:0x64a1d1cd}
	I0701 12:39:55.263048    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5a:80:fc:ef:a2:6 ID:1,5a:80:fc:ef:a2:6 Lease:0x64a08041}
	I0701 12:39:55.263053    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:9f:7c:22:df:64 ID:1,a6:9f:7c:22:df:64 Lease:0x64a1d175}
	I0701 12:39:57.265192    2334 main.go:141] libmachine: Attempt 1
	I0701 12:39:57.265238    2334 main.go:141] libmachine: Searching for c2:16:f5:2a:29:a6 in /var/db/dhcpd_leases ...
	I0701 12:39:57.265482    2334 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0701 12:39:57.265526    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:4a:25:c7:e6:92:de ID:1,4a:25:c7:e6:92:de Lease:0x64a1d1cd}
	I0701 12:39:57.265552    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5a:80:fc:ef:a2:6 ID:1,5a:80:fc:ef:a2:6 Lease:0x64a08041}
	I0701 12:39:57.265605    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:9f:7c:22:df:64 ID:1,a6:9f:7c:22:df:64 Lease:0x64a1d175}
	I0701 12:39:59.267718    2334 main.go:141] libmachine: Attempt 2
	I0701 12:39:59.267733    2334 main.go:141] libmachine: Searching for c2:16:f5:2a:29:a6 in /var/db/dhcpd_leases ...
	I0701 12:39:59.267844    2334 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0701 12:39:59.267861    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:4a:25:c7:e6:92:de ID:1,4a:25:c7:e6:92:de Lease:0x64a1d1cd}
	I0701 12:39:59.267866    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5a:80:fc:ef:a2:6 ID:1,5a:80:fc:ef:a2:6 Lease:0x64a08041}
	I0701 12:39:59.267870    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:9f:7c:22:df:64 ID:1,a6:9f:7c:22:df:64 Lease:0x64a1d175}
	I0701 12:40:01.269866    2334 main.go:141] libmachine: Attempt 3
	I0701 12:40:01.269872    2334 main.go:141] libmachine: Searching for c2:16:f5:2a:29:a6 in /var/db/dhcpd_leases ...
	I0701 12:40:01.269918    2334 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0701 12:40:01.269924    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:4a:25:c7:e6:92:de ID:1,4a:25:c7:e6:92:de Lease:0x64a1d1cd}
	I0701 12:40:01.269928    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5a:80:fc:ef:a2:6 ID:1,5a:80:fc:ef:a2:6 Lease:0x64a08041}
	I0701 12:40:01.269932    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:9f:7c:22:df:64 ID:1,a6:9f:7c:22:df:64 Lease:0x64a1d175}
	I0701 12:40:03.271918    2334 main.go:141] libmachine: Attempt 4
	I0701 12:40:03.271923    2334 main.go:141] libmachine: Searching for c2:16:f5:2a:29:a6 in /var/db/dhcpd_leases ...
	I0701 12:40:03.271967    2334 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0701 12:40:03.271973    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:4a:25:c7:e6:92:de ID:1,4a:25:c7:e6:92:de Lease:0x64a1d1cd}
	I0701 12:40:03.271978    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5a:80:fc:ef:a2:6 ID:1,5a:80:fc:ef:a2:6 Lease:0x64a08041}
	I0701 12:40:03.271983    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:9f:7c:22:df:64 ID:1,a6:9f:7c:22:df:64 Lease:0x64a1d175}
	I0701 12:40:05.274020    2334 main.go:141] libmachine: Attempt 5
	I0701 12:40:05.274030    2334 main.go:141] libmachine: Searching for c2:16:f5:2a:29:a6 in /var/db/dhcpd_leases ...
	I0701 12:40:05.274121    2334 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0701 12:40:05.274128    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:4a:25:c7:e6:92:de ID:1,4a:25:c7:e6:92:de Lease:0x64a1d1cd}
	I0701 12:40:05.274139    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5a:80:fc:ef:a2:6 ID:1,5a:80:fc:ef:a2:6 Lease:0x64a08041}
	I0701 12:40:05.274144    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:9f:7c:22:df:64 ID:1,a6:9f:7c:22:df:64 Lease:0x64a1d175}
	I0701 12:40:07.276175    2334 main.go:141] libmachine: Attempt 6
	I0701 12:40:07.276197    2334 main.go:141] libmachine: Searching for c2:16:f5:2a:29:a6 in /var/db/dhcpd_leases ...
	I0701 12:40:07.276316    2334 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0701 12:40:07.276325    2334 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:c2:16:f5:2a:29:a6 ID:1,c2:16:f5:2a:29:a6 Lease:0x64a1d296}
	I0701 12:40:07.276329    2334 main.go:141] libmachine: Found match: c2:16:f5:2a:29:a6
	I0701 12:40:07.276339    2334 main.go:141] libmachine: IP: 192.168.105.5
	I0701 12:40:07.276344    2334 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0701 12:40:08.281613    2334 machine.go:88] provisioning docker machine ...
	I0701 12:40:08.281632    2334 buildroot.go:166] provisioning hostname "image-933000"
	I0701 12:40:08.281702    2334 main.go:141] libmachine: Using SSH client type: native
	I0701 12:40:08.281979    2334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ae1100] 0x100ae3b60 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0701 12:40:08.281982    2334 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-933000 && echo "image-933000" | sudo tee /etc/hostname
	I0701 12:40:08.340292    2334 main.go:141] libmachine: SSH cmd err, output: <nil>: image-933000
	
	I0701 12:40:08.340352    2334 main.go:141] libmachine: Using SSH client type: native
	I0701 12:40:08.340605    2334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ae1100] 0x100ae3b60 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0701 12:40:08.340611    2334 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-933000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-933000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-933000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:40:08.398224    2334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:40:08.398230    2334 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15452-1041/.minikube CaCertPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15452-1041/.minikube}
	I0701 12:40:08.398239    2334 buildroot.go:174] setting up certificates
	I0701 12:40:08.398244    2334 provision.go:83] configureAuth start
	I0701 12:40:08.398246    2334 provision.go:138] copyHostCerts
	I0701 12:40:08.398316    2334 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.pem, removing ...
	I0701 12:40:08.398320    2334 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.pem
	I0701 12:40:08.398430    2334 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.pem (1078 bytes)
	I0701 12:40:08.398605    2334 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1041/.minikube/cert.pem, removing ...
	I0701 12:40:08.398607    2334 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1041/.minikube/cert.pem
	I0701 12:40:08.398641    2334 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15452-1041/.minikube/cert.pem (1123 bytes)
	I0701 12:40:08.398744    2334 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1041/.minikube/key.pem, removing ...
	I0701 12:40:08.398745    2334 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1041/.minikube/key.pem
	I0701 12:40:08.398781    2334 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15452-1041/.minikube/key.pem (1679 bytes)
	I0701 12:40:08.398849    2334 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca-key.pem org=jenkins.image-933000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-933000]
	I0701 12:40:08.566845    2334 provision.go:172] copyRemoteCerts
	I0701 12:40:08.566895    2334 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:40:08.566901    2334 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/id_rsa Username:docker}
	I0701 12:40:08.597708    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 12:40:08.604517    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:40:08.611058    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0701 12:40:08.618227    2334 provision.go:86] duration metric: configureAuth took 219.984708ms
	I0701 12:40:08.618232    2334 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:40:08.618333    2334 config.go:182] Loaded profile config "image-933000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:40:08.618367    2334 main.go:141] libmachine: Using SSH client type: native
	I0701 12:40:08.618584    2334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ae1100] 0x100ae3b60 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0701 12:40:08.618587    2334 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:40:08.672258    2334 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:40:08.672262    2334 buildroot.go:70] root file system type: tmpfs
	I0701 12:40:08.672322    2334 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:40:08.672380    2334 main.go:141] libmachine: Using SSH client type: native
	I0701 12:40:08.672609    2334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ae1100] 0x100ae3b60 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0701 12:40:08.672641    2334 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:40:08.734805    2334 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:40:08.734849    2334 main.go:141] libmachine: Using SSH client type: native
	I0701 12:40:08.735092    2334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ae1100] 0x100ae3b60 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0701 12:40:08.735100    2334 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:40:09.083501    2334 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
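	The command above is the provisioner's idempotent-update idiom: diff -u exits non-zero both when the two files differ and, as on this first boot, when the old file does not exist yet, so the replace-and-restart branch only runs when something actually changed. The same pattern in isolation (file, service, and render_config names are placeholders):

	  render_config > /tmp/app.conf.new   # render_config stands in for whatever produces the file
	  if ! sudo diff -u /etc/app.conf /tmp/app.conf.new; then
	      sudo mv /tmp/app.conf.new /etc/app.conf
	      sudo systemctl daemon-reload && sudo systemctl restart app
	  fi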
	
	I0701 12:40:09.083509    2334 machine.go:91] provisioned docker machine in 801.904917ms
	I0701 12:40:09.083513    2334 client.go:171] LocalClient.Create took 14.324407084s
	I0701 12:40:09.083527    2334 start.go:167] duration metric: libmachine.API.Create for "image-933000" took 14.324449333s
	I0701 12:40:09.083530    2334 start.go:300] post-start starting for "image-933000" (driver="qemu2")
	I0701 12:40:09.083533    2334 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:40:09.083604    2334 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:40:09.083612    2334 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/id_rsa Username:docker}
	I0701 12:40:09.114702    2334 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:40:09.116000    2334 info.go:137] Remote host: Buildroot 2021.02.12
	I0701 12:40:09.116004    2334 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15452-1041/.minikube/addons for local assets ...
	I0701 12:40:09.116066    2334 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15452-1041/.minikube/files for local assets ...
	I0701 12:40:09.116177    2334 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/14612.pem -> 14612.pem in /etc/ssl/certs
	I0701 12:40:09.116292    2334 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:40:09.118753    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/14612.pem --> /etc/ssl/certs/14612.pem (1708 bytes)
	I0701 12:40:09.125213    2334 start.go:303] post-start completed in 41.68125ms
	I0701 12:40:09.125612    2334 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/config.json ...
	I0701 12:40:09.125769    2334 start.go:128] duration metric: createHost completed in 14.393853666s
	I0701 12:40:09.125798    2334 main.go:141] libmachine: Using SSH client type: native
	I0701 12:40:09.126022    2334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ae1100] 0x100ae3b60 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0701 12:40:09.126025    2334 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 12:40:09.178847    2334 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688240409.517322668
	
	I0701 12:40:09.178851    2334 fix.go:206] guest clock: 1688240409.517322668
	I0701 12:40:09.178854    2334 fix.go:219] Guest: 2023-07-01 12:40:09.517322668 -0700 PDT Remote: 2023-07-01 12:40:09.125772 -0700 PDT m=+14.494219168 (delta=391.550668ms)
	I0701 12:40:09.178862    2334 fix.go:190] guest clock delta is within tolerance: 391.550668ms
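	fix.go is comparing the guest's wall clock against the host's and only resyncs when the delta leaves tolerance; here the 391ms skew is accepted. A rough shell equivalent of the measurement (minikube does this in Go; the address is the one from this run):

	  guest=$(ssh docker@192.168.105.5 'date +%s.%N')
	  host=$(date +%s.%N)
	  echo "guest-host delta: $(echo "$guest - $host" | bc)s"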
	I0701 12:40:09.178864    2334 start.go:83] releasing machines lock for "image-933000", held for 14.44699375s
	I0701 12:40:09.179176    2334 ssh_runner.go:195] Run: cat /version.json
	I0701 12:40:09.179182    2334 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/id_rsa Username:docker}
	I0701 12:40:09.179199    2334 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:40:09.179215    2334 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/id_rsa Username:docker}
	I0701 12:40:09.247640    2334 ssh_runner.go:195] Run: systemctl --version
	I0701 12:40:09.249815    2334 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 12:40:09.251843    2334 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:40:09.251871    2334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 12:40:09.257045    2334 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:40:09.257050    2334 start.go:466] detecting cgroup driver to use...
	I0701 12:40:09.257113    2334 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:40:09.262477    2334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0701 12:40:09.265841    2334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:40:09.268950    2334 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:40:09.268972    2334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:40:09.271809    2334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:40:09.274897    2334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:40:09.278276    2334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:40:09.281696    2334 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:40:09.284790    2334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:40:09.287633    2334 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:40:09.290623    2334 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:40:09.293849    2334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:40:09.369885    2334 ssh_runner.go:195] Run: sudo systemctl restart containerd
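	Taken together, the sed edits above pin containerd to the cgroupfs driver and normalize every runtime reference to the runc v2 shim. Roughly the stanza they leave behind in /etc/containerd/config.toml (reconstructed from the sed expressions, not captured from the VM):

	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false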
	I0701 12:40:09.376868    2334 start.go:466] detecting cgroup driver to use...
	I0701 12:40:09.376931    2334 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:40:09.382280    2334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:40:09.387095    2334 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:40:09.393148    2334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:40:09.397573    2334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:40:09.402179    2334 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:40:09.450319    2334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:40:09.455915    2334 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:40:09.461576    2334 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:40:09.462875    2334 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:40:09.465754    2334 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:40:09.470649    2334 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:40:09.559970    2334 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:40:09.622058    2334 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:40:09.622067    2334 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0701 12:40:09.627747    2334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:40:09.711896    2334 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:40:10.873055    2334 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161171625s)
	I0701 12:40:10.873103    2334 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:40:10.947479    2334 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0701 12:40:11.024704    2334 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0701 12:40:11.084627    2334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:40:11.162757    2334 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0701 12:40:11.170103    2334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:40:11.248594    2334 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0701 12:40:11.271485    2334 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0701 12:40:11.271580    2334 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0701 12:40:11.273811    2334 start.go:534] Will wait 60s for crictl version
	I0701 12:40:11.273841    2334 ssh_runner.go:195] Run: which crictl
	I0701 12:40:11.275394    2334 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 12:40:11.291203    2334 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0701 12:40:11.291264    2334 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:40:11.300885    2334 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:40:11.317848    2334 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0701 12:40:11.317927    2334 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0701 12:40:11.319391    2334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
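	This is the duplicate-safe /etc/hosts edit: filter out any existing host.minikube.internal line, append the fresh entry, and copy the result back over the original via a temp file. Generalized (the entry is the one from this run):

	  entry=$'192.168.105.1\thost.minikube.internal'
	  { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/hosts.$$
	  sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$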
	I0701 12:40:11.323109    2334 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:40:11.323147    2334 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:40:11.328602    2334 docker.go:636] Got preloaded images: 
	I0701 12:40:11.328605    2334 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0701 12:40:11.328645    2334 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0701 12:40:11.331647    2334 ssh_runner.go:195] Run: which lz4
	I0701 12:40:11.333004    2334 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0701 12:40:11.334223    2334 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0701 12:40:11.334232    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0701 12:40:12.590567    2334 docker.go:600] Took 1.257637 seconds to copy over tarball
	I0701 12:40:12.590620    2334 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0701 12:40:13.617001    2334 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.026369583s)
	I0701 12:40:13.617014    2334 ssh_runner.go:146] rm: /preloaded.tar.lz4
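	The preload path in full: stat shows the tarball is not yet on the guest, so it is streamed over SSH and unpacked with lz4 straight into /var (where /var/lib/docker lives), then deleted. Condensed into two commands, a sketch assuming lz4 on the guest (which the "which lz4" check above confirmed):

	  ssh docker@192.168.105.5 'sudo tee /preloaded.tar.lz4 >/dev/null' \
	      < preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	  ssh docker@192.168.105.5 \
	      'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'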
	I0701 12:40:13.632017    2334 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0701 12:40:13.634987    2334 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0701 12:40:13.640330    2334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:40:13.704511    2334 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:40:15.146907    2334 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.442411041s)
	I0701 12:40:15.146981    2334 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:40:15.152801    2334 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0701 12:40:15.152808    2334 cache_images.go:84] Images are preloaded, skipping loading
	I0701 12:40:15.152870    2334 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 12:40:15.160890    2334 cni.go:84] Creating CNI manager for ""
	I0701 12:40:15.160898    2334 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:40:15.160906    2334 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 12:40:15.160914    2334 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-933000 NodeName:image-933000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 12:40:15.160987    2334 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-933000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0701 12:40:15.161028    2334 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-933000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:image-933000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0701 12:40:15.161094    2334 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0701 12:40:15.163949    2334 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:40:15.163976    2334 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 12:40:15.166700    2334 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0701 12:40:15.171605    2334 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 12:40:15.176440    2334 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0701 12:40:15.181199    2334 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0701 12:40:15.182371    2334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:40:15.186379    2334 certs.go:56] Setting up /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000 for IP: 192.168.105.5
	I0701 12:40:15.186386    2334 certs.go:190] acquiring lock for shared ca certs: {Name:mk0d2f6007eea276ce17a3a9c6aca904411113ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:15.186528    2334 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.key
	I0701 12:40:15.186564    2334 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.key
	I0701 12:40:15.186593    2334 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/client.key
	I0701 12:40:15.186599    2334 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/client.crt with IP's: []
	I0701 12:40:15.325497    2334 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/client.crt ...
	I0701 12:40:15.325500    2334 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/client.crt: {Name:mk5861936f7d69c6560b5b3006907a9696087329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:15.325729    2334 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/client.key ...
	I0701 12:40:15.325731    2334 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/client.key: {Name:mk6945ef3f1c4460311d8b22efc8a2e077811320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:15.325845    2334 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/apiserver.key.e69b33ca
	I0701 12:40:15.325851    2334 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0701 12:40:15.431902    2334 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/apiserver.crt.e69b33ca ...
	I0701 12:40:15.431904    2334 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/apiserver.crt.e69b33ca: {Name:mkbabcb82b9df9ebead2e53e2f6279d6062abb03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:15.432043    2334 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/apiserver.key.e69b33ca ...
	I0701 12:40:15.432045    2334 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/apiserver.key.e69b33ca: {Name:mk6d0ee6cd4fc25d240ad28fed8dab422b61689a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:15.432153    2334 certs.go:337] copying /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/apiserver.crt
	I0701 12:40:15.432329    2334 certs.go:341] copying /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/apiserver.key
	I0701 12:40:15.432433    2334 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/proxy-client.key
	I0701 12:40:15.432440    2334 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/proxy-client.crt with IP's: []
	I0701 12:40:15.508729    2334 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/proxy-client.crt ...
	I0701 12:40:15.508733    2334 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/proxy-client.crt: {Name:mkdf8066a91ac641b76a7cb8d169ac74f4a7d5af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:15.508942    2334 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/proxy-client.key ...
	I0701 12:40:15.508944    2334 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/proxy-client.key: {Name:mk21ed51c17ea8499d40af209870045922c4e1d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:15.509199    2334 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/1461.pem (1338 bytes)
	W0701 12:40:15.509231    2334 certs.go:433] ignoring /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/1461_empty.pem, impossibly tiny 0 bytes
	I0701 12:40:15.509247    2334 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca-key.pem (1679 bytes)
	I0701 12:40:15.509265    2334 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:40:15.509280    2334 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:40:15.509296    2334 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/key.pem (1679 bytes)
	I0701 12:40:15.509336    2334 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/14612.pem (1708 bytes)
	I0701 12:40:15.509647    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 12:40:15.517576    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0701 12:40:15.524543    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:40:15.530984    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/image-933000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:40:15.537990    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:40:15.544801    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:40:15.551153    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:40:15.558295    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:40:15.565251    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:40:15.571873    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/1461.pem --> /usr/share/ca-certificates/1461.pem (1338 bytes)
	I0701 12:40:15.578428    2334 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/14612.pem --> /usr/share/ca-certificates/14612.pem (1708 bytes)
	I0701 12:40:15.585493    2334 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 12:40:15.590375    2334 ssh_runner.go:195] Run: openssl version
	I0701 12:40:15.592343    2334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:40:15.595321    2334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:40:15.596825    2334 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  1 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:40:15.596846    2334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:40:15.598629    2334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:40:15.601797    2334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1461.pem && ln -fs /usr/share/ca-certificates/1461.pem /etc/ssl/certs/1461.pem"
	I0701 12:40:15.604820    2334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1461.pem
	I0701 12:40:15.606183    2334 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul  1 19:36 /usr/share/ca-certificates/1461.pem
	I0701 12:40:15.606203    2334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1461.pem
	I0701 12:40:15.607964    2334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1461.pem /etc/ssl/certs/51391683.0"
	I0701 12:40:15.611036    2334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14612.pem && ln -fs /usr/share/ca-certificates/14612.pem /etc/ssl/certs/14612.pem"
	I0701 12:40:15.614450    2334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14612.pem
	I0701 12:40:15.615991    2334 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul  1 19:36 /usr/share/ca-certificates/14612.pem
	I0701 12:40:15.616010    2334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14612.pem
	I0701 12:40:15.617674    2334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14612.pem /etc/ssl/certs/3ec20f2e.0"
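	The <hash>.0 names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash lookup names: OpenSSL finds a CA in /etc/ssl/certs by hashing the certificate subject and probing <hash>.N. The two-step idiom, spelled out for the minikube CA (equivalent to what c_rehash automates):

	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"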
	I0701 12:40:15.620478    2334 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0701 12:40:15.621759    2334 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0701 12:40:15.621787    2334 kubeadm.go:404] StartCluster: {Name:image-933000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-933000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:40:15.621852    2334 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 12:40:15.627294    2334 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 12:40:15.630507    2334 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 12:40:15.633416    2334 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 12:40:15.636040    2334 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 12:40:15.636051    2334 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0701 12:40:15.659111    2334 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0701 12:40:15.659166    2334 kubeadm.go:322] [preflight] Running pre-flight checks
	I0701 12:40:15.711190    2334 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0701 12:40:15.711243    2334 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0701 12:40:15.711286    2334 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0701 12:40:15.770281    2334 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0701 12:40:15.778444    2334 out.go:204]   - Generating certificates and keys ...
	I0701 12:40:15.778487    2334 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0701 12:40:15.778514    2334 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0701 12:40:15.870130    2334 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0701 12:40:15.936124    2334 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0701 12:40:16.128842    2334 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0701 12:40:16.218251    2334 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0701 12:40:16.278155    2334 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0701 12:40:16.278216    2334 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-933000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0701 12:40:16.322603    2334 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0701 12:40:16.322668    2334 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-933000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0701 12:40:16.379273    2334 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0701 12:40:16.488099    2334 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0701 12:40:16.598620    2334 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0701 12:40:16.598652    2334 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0701 12:40:16.705214    2334 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0701 12:40:16.774094    2334 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0701 12:40:16.820047    2334 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0701 12:40:16.958895    2334 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0701 12:40:16.965879    2334 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0701 12:40:16.966276    2334 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0701 12:40:16.966360    2334 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0701 12:40:17.035021    2334 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0701 12:40:17.043196    2334 out.go:204]   - Booting up control plane ...
	I0701 12:40:17.043242    2334 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0701 12:40:17.043273    2334 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0701 12:40:17.043300    2334 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0701 12:40:17.043355    2334 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0701 12:40:17.043432    2334 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0701 12:40:21.043559    2334 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.008057 seconds
	I0701 12:40:21.043731    2334 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0701 12:40:21.058139    2334 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0701 12:40:21.571841    2334 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0701 12:40:21.571928    2334 kubeadm.go:322] [mark-control-plane] Marking the node image-933000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0701 12:40:22.080772    2334 kubeadm.go:322] [bootstrap-token] Using token: nme34f.wq09p8skv583ra41
	I0701 12:40:22.084112    2334 out.go:204]   - Configuring RBAC rules ...
	I0701 12:40:22.084196    2334 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0701 12:40:22.085393    2334 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0701 12:40:22.089801    2334 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0701 12:40:22.091275    2334 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0701 12:40:22.093129    2334 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0701 12:40:22.094622    2334 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0701 12:40:22.099464    2334 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0701 12:40:22.268542    2334 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0701 12:40:22.487599    2334 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0701 12:40:22.488461    2334 kubeadm.go:322] 
	I0701 12:40:22.488496    2334 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0701 12:40:22.488498    2334 kubeadm.go:322] 
	I0701 12:40:22.488540    2334 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0701 12:40:22.488541    2334 kubeadm.go:322] 
	I0701 12:40:22.488552    2334 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0701 12:40:22.488578    2334 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0701 12:40:22.488603    2334 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0701 12:40:22.488606    2334 kubeadm.go:322] 
	I0701 12:40:22.488640    2334 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0701 12:40:22.488644    2334 kubeadm.go:322] 
	I0701 12:40:22.488667    2334 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0701 12:40:22.488669    2334 kubeadm.go:322] 
	I0701 12:40:22.488703    2334 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0701 12:40:22.488740    2334 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0701 12:40:22.488777    2334 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0701 12:40:22.488780    2334 kubeadm.go:322] 
	I0701 12:40:22.488830    2334 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0701 12:40:22.488870    2334 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0701 12:40:22.488871    2334 kubeadm.go:322] 
	I0701 12:40:22.488922    2334 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nme34f.wq09p8skv583ra41 \
	I0701 12:40:22.488984    2334 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:46e6b689074307837292321246b5000df1ddfdde72c2b1da038f680c54d9d678 \
	I0701 12:40:22.489010    2334 kubeadm.go:322] 	--control-plane 
	I0701 12:40:22.489016    2334 kubeadm.go:322] 
	I0701 12:40:22.489057    2334 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0701 12:40:22.489058    2334 kubeadm.go:322] 
	I0701 12:40:22.489103    2334 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nme34f.wq09p8skv583ra41 \
	I0701 12:40:22.489157    2334 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:46e6b689074307837292321246b5000df1ddfdde72c2b1da038f680c54d9d678 
	I0701 12:40:22.489230    2334 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0701 12:40:22.489243    2334 cni.go:84] Creating CNI manager for ""
	I0701 12:40:22.489249    2334 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:40:22.494689    2334 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0701 12:40:22.498353    2334 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0701 12:40:22.501268    2334 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0701 12:40:22.505935    2334 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 12:40:22.505997    2334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=2455319192314a5b3ac0f7b56253e90d3c5c74c2 minikube.k8s.io/name=image-933000 minikube.k8s.io/updated_at=2023_07_01T12_40_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:40:22.506000    2334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:40:22.509283    2334 ops.go:34] apiserver oom_adj: -16
	I0701 12:40:22.580746    2334 kubeadm.go:1081] duration metric: took 74.78775ms to wait for elevateKubeSystemPrivileges.
	I0701 12:40:22.580758    2334 kubeadm.go:406] StartCluster complete in 6.959104208s
	I0701 12:40:22.580766    2334 settings.go:142] acquiring lock: {Name:mk1853b69cc489034eba1c68e94bf3f8bc0ceb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:22.580856    2334 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:40:22.581184    2334 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/kubeconfig: {Name:mk6d6ec6f258eefdfd78eed77d0a2eac619f380e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:22.581374    2334 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 12:40:22.581400    2334 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0701 12:40:22.581464    2334 addons.go:66] Setting storage-provisioner=true in profile "image-933000"
	I0701 12:40:22.581471    2334 addons.go:228] Setting addon storage-provisioner=true in "image-933000"
	I0701 12:40:22.581477    2334 config.go:182] Loaded profile config "image-933000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:40:22.581480    2334 addons.go:66] Setting default-storageclass=true in profile "image-933000"
	I0701 12:40:22.581488    2334 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-933000"
	I0701 12:40:22.581494    2334 host.go:66] Checking if "image-933000" exists ...
	I0701 12:40:22.585800    2334 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 12:40:22.588789    2334 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 12:40:22.588792    2334 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 12:40:22.588799    2334 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/id_rsa Username:docker}
	I0701 12:40:22.593065    2334 addons.go:228] Setting addon default-storageclass=true in "image-933000"
	I0701 12:40:22.593077    2334 host.go:66] Checking if "image-933000" exists ...
	I0701 12:40:22.593699    2334 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 12:40:22.593703    2334 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 12:40:22.593707    2334 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/image-933000/id_rsa Username:docker}
	I0701 12:40:22.628089    2334 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 12:40:22.631351    2334 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 12:40:22.661371    2334 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 12:40:23.055711    2334 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
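	The sed pipeline a few lines up splices a hosts plugin block into CoreDNS's Corefile just ahead of the forward directive, so pods can resolve host.minikube.internal to the host gateway. Roughly the fragment it produces (reconstructed from the sed expressions, not dumped from the cluster):

	  hosts {
	     192.168.105.1 host.minikube.internal
	     fallthrough
	  }
	  forward . /etc/resolv.conf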
	I0701 12:40:23.098739    2334 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-933000" context rescaled to 1 replicas
	I0701 12:40:23.098756    2334 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:40:23.107234    2334 out.go:177] * Verifying Kubernetes components...
	I0701 12:40:23.111318    2334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:40:23.147359    2334 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0701 12:40:23.144470    2334 api_server.go:52] waiting for apiserver process to appear ...
	I0701 12:40:23.155243    2334 addons.go:499] enable addons completed in 573.861375ms: enabled=[storage-provisioner default-storageclass]
	I0701 12:40:23.155269    2334 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:40:23.159231    2334 api_server.go:72] duration metric: took 60.467209ms to wait for apiserver process to appear ...
	I0701 12:40:23.159235    2334 api_server.go:88] waiting for apiserver healthz status ...
	I0701 12:40:23.159244    2334 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0701 12:40:23.162150    2334 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
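	The healthz probe is a plain HTTPS GET that unauthenticated clients may hit under default RBAC; the same check by hand (-k because the apiserver certificate chains to minikube's private CA):

	  curl -k https://192.168.105.5:8443/healthz
	  # expected output: ok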
	I0701 12:40:23.162804    2334 api_server.go:141] control plane version: v1.27.3
	I0701 12:40:23.162808    2334 api_server.go:131] duration metric: took 3.572292ms to wait for apiserver health ...
	I0701 12:40:23.162811    2334 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 12:40:23.165878    2334 system_pods.go:59] 5 kube-system pods found
	I0701 12:40:23.165884    2334 system_pods.go:61] "etcd-image-933000" [05dd331a-6c48-4ac3-aba9-52647bf12b8a] Pending
	I0701 12:40:23.165886    2334 system_pods.go:61] "kube-apiserver-image-933000" [c4ca855a-8ec3-4297-82f9-5d330ef30737] Pending
	I0701 12:40:23.165888    2334 system_pods.go:61] "kube-controller-manager-image-933000" [ffb29317-e5fb-4914-a016-770883cf8833] Pending
	I0701 12:40:23.165889    2334 system_pods.go:61] "kube-scheduler-image-933000" [c3bc1c49-e7c5-4781-9297-b8512c0994cf] Pending
	I0701 12:40:23.165891    2334 system_pods.go:61] "storage-provisioner" [79cdecbc-8802-479b-a7c2-bc0cddb6a3e9] Pending
	I0701 12:40:23.165892    2334 system_pods.go:74] duration metric: took 3.080167ms to wait for pod list to return data ...
	I0701 12:40:23.165895    2334 kubeadm.go:581] duration metric: took 67.1325ms to wait for : map[apiserver:true system_pods:true] ...
	I0701 12:40:23.165900    2334 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:40:23.167212    2334 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0701 12:40:23.167221    2334 node_conditions.go:123] node cpu capacity is 2
	I0701 12:40:23.167225    2334 node_conditions.go:105] duration metric: took 1.3235ms to run NodePressure ...
	I0701 12:40:23.167229    2334 start.go:228] waiting for startup goroutines ...
	I0701 12:40:23.167232    2334 start.go:233] waiting for cluster config update ...
	I0701 12:40:23.167236    2334 start.go:242] writing updated cluster config ...
	I0701 12:40:23.167499    2334 ssh_runner.go:195] Run: rm -f paused
	I0701 12:40:23.196287    2334 start.go:642] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0701 12:40:23.200319    2334 out.go:177] * Done! kubectl is now configured to use "image-933000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-07-01 19:40:06 UTC, ends at Sat 2023-07-01 19:40:25 UTC. --
	Jul 01 19:40:18 image-933000 cri-dockerd[998]: time="2023-07-01T19:40:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/75a7a570efd714da5280f9f36dd7751e765465859557bfbbe205829b82e55f0d/resolv.conf as [nameserver 192.168.105.1]"
	Jul 01 19:40:18 image-933000 dockerd[1105]: time="2023-07-01T19:40:18.465531422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 19:40:18 image-933000 dockerd[1105]: time="2023-07-01T19:40:18.465660422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:40:18 image-933000 dockerd[1105]: time="2023-07-01T19:40:18.465690089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 19:40:18 image-933000 dockerd[1105]: time="2023-07-01T19:40:18.465714006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:40:18 image-933000 cri-dockerd[998]: time="2023-07-01T19:40:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e3b7a04e6d4a55b707d8a64bcb70d70eedc4c9b4eec6ba61078be5bfc1c2de36/resolv.conf as [nameserver 192.168.105.1]"
	Jul 01 19:40:18 image-933000 dockerd[1105]: time="2023-07-01T19:40:18.514637464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 19:40:18 image-933000 dockerd[1105]: time="2023-07-01T19:40:18.514664964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:40:18 image-933000 dockerd[1105]: time="2023-07-01T19:40:18.514671631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 19:40:18 image-933000 dockerd[1105]: time="2023-07-01T19:40:18.514676089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:40:18 image-933000 dockerd[1105]: time="2023-07-01T19:40:18.536861714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 19:40:18 image-933000 dockerd[1105]: time="2023-07-01T19:40:18.537020881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:40:18 image-933000 dockerd[1105]: time="2023-07-01T19:40:18.537063964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 19:40:18 image-933000 dockerd[1105]: time="2023-07-01T19:40:18.537108131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:40:25 image-933000 dockerd[1099]: time="2023-07-01T19:40:25.052948175Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 01 19:40:25 image-933000 dockerd[1099]: time="2023-07-01T19:40:25.170652259Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 01 19:40:25 image-933000 dockerd[1099]: time="2023-07-01T19:40:25.186139551Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 01 19:40:25 image-933000 dockerd[1105]: time="2023-07-01T19:40:25.217180259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 19:40:25 image-933000 dockerd[1105]: time="2023-07-01T19:40:25.217208884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:40:25 image-933000 dockerd[1105]: time="2023-07-01T19:40:25.217215384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 19:40:25 image-933000 dockerd[1105]: time="2023-07-01T19:40:25.217376259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:40:25 image-933000 dockerd[1099]: time="2023-07-01T19:40:25.341262176Z" level=info msg="ignoring event" container=ee176422a802319b664d8b33a7ee17062f9131c5dfd705547fbf0dcb57d3af24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 19:40:25 image-933000 dockerd[1105]: time="2023-07-01T19:40:25.341377092Z" level=info msg="shim disconnected" id=ee176422a802319b664d8b33a7ee17062f9131c5dfd705547fbf0dcb57d3af24 namespace=moby
	Jul 01 19:40:25 image-933000 dockerd[1105]: time="2023-07-01T19:40:25.341487384Z" level=warning msg="cleaning up after shim disconnected" id=ee176422a802319b664d8b33a7ee17062f9131c5dfd705547fbf0dcb57d3af24 namespace=moby
	Jul 01 19:40:25 image-933000 dockerd[1105]: time="2023-07-01T19:40:25.341491884Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1fcad965dd568       bcb9e554eaab6       7 seconds ago       Running             kube-scheduler            0                   e3b7a04e6d4a5
	c8a0184fcc319       ab3683b584ae5       7 seconds ago       Running             kube-controller-manager   0                   75a7a570efd71
	733f1793cc115       39dfb036b0986       7 seconds ago       Running             kube-apiserver            0                   0fc9e063560ae
	624ae9fb14276       24bc64e911039       7 seconds ago       Running             etcd                      0                   9f6251c5db70d
	
	* 
	* ==> describe nodes <==
	* Name:               image-933000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-933000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2455319192314a5b3ac0f7b56253e90d3c5c74c2
	                    minikube.k8s.io/name=image-933000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_01T12_40_22_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Jul 2023 19:40:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-933000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Jul 2023 19:40:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Jul 2023 19:40:22 +0000   Sat, 01 Jul 2023 19:40:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Jul 2023 19:40:22 +0000   Sat, 01 Jul 2023 19:40:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Jul 2023 19:40:22 +0000   Sat, 01 Jul 2023 19:40:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Jul 2023 19:40:22 +0000   Sat, 01 Jul 2023 19:40:19 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-933000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a1712bcd7314bd38164344c869eb271
	  System UUID:                1a1712bcd7314bd38164344c869eb271
	  Boot ID:                    b9210d4c-0f1c-4fdf-87e8-864392bf3cfc
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-933000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4s
	  kube-system                 kube-apiserver-image-933000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-933000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-933000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 8s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)  kubelet  Node image-933000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)  kubelet  Node image-933000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)  kubelet  Node image-933000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s               kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 3s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s               kubelet  Node image-933000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s               kubelet  Node image-933000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s               kubelet  Node image-933000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Jul 1 19:40] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.648752] EINJ: EINJ table not found.
	[  +0.515956] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044031] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000792] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.144715] systemd-fstab-generator[478]: Ignoring "noauto" for root device
	[  +0.087566] systemd-fstab-generator[489]: Ignoring "noauto" for root device
	[  +0.408568] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.190040] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +0.065946] systemd-fstab-generator[714]: Ignoring "noauto" for root device
	[  +0.087655] systemd-fstab-generator[727]: Ignoring "noauto" for root device
	[  +1.149050] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.086846] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	[  +0.077584] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[  +0.060247] systemd-fstab-generator[937]: Ignoring "noauto" for root device
	[  +0.076418] systemd-fstab-generator[948]: Ignoring "noauto" for root device
	[  +0.086171] systemd-fstab-generator[991]: Ignoring "noauto" for root device
	[  +2.457025] systemd-fstab-generator[1092]: Ignoring "noauto" for root device
	[  +3.323752] systemd-fstab-generator[1420]: Ignoring "noauto" for root device
	[  +0.368405] kauditd_printk_skb: 68 callbacks suppressed
	[  +4.780478] systemd-fstab-generator[2310]: Ignoring "noauto" for root device
	[  +2.722915] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [624ae9fb1427] <==
	* {"level":"info","ts":"2023-07-01T19:40:18.580Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-07-01T19:40:18.580Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-07-01T19:40:18.580Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-01T19:40:18.580Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-01T19:40:18.580Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-01T19:40:18.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-07-01T19:40:18.580Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-07-01T19:40:19.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-01T19:40:19.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-01T19:40:19.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-07-01T19:40:19.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-07-01T19:40:19.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-07-01T19:40:19.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-07-01T19:40:19.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-07-01T19:40:19.141Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-01T19:40:19.142Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-933000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-01T19:40:19.142Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-01T19:40:19.143Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-07-01T19:40:19.143Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-01T19:40:19.143Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-01T19:40:19.143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-01T19:40:19.143Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-01T19:40:19.145Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-01T19:40:19.147Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-01T19:40:19.147Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  19:40:25 up 0 min,  0 users,  load average: 0.35, 0.08, 0.03
	Linux image-933000 5.10.57 #1 SMP PREEMPT Thu Jun 22 18:49:06 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [733f1793cc11] <==
	* I0701 19:40:19.917601       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 19:40:19.918052       1 controller.go:624] quota admission added evaluator for: namespaces
	I0701 19:40:19.919183       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0701 19:40:19.919335       1 aggregator.go:152] initial CRD sync complete...
	I0701 19:40:19.919344       1 autoregister_controller.go:141] Starting autoregister controller
	I0701 19:40:19.919346       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0701 19:40:19.919349       1 cache.go:39] Caches are synced for autoregister controller
	I0701 19:40:19.921574       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0701 19:40:19.921592       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 19:40:19.933792       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0701 19:40:19.939209       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 19:40:20.670273       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0701 19:40:20.830448       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0701 19:40:20.839201       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0701 19:40:20.839495       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0701 19:40:20.997965       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0701 19:40:21.010758       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0701 19:40:21.087215       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0701 19:40:21.089892       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0701 19:40:21.090599       1 controller.go:624] quota admission added evaluator for: endpoints
	I0701 19:40:21.093011       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0701 19:40:21.875184       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0701 19:40:22.601744       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0701 19:40:22.606895       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0701 19:40:22.611046       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [c8a0184fcc31] <==
	* I0701 19:40:23.918271       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0701 19:40:23.918276       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0701 19:40:23.918291       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0701 19:40:23.918958       1 controllermanager.go:638] "Started controller" controller="csrsigning"
	I0701 19:40:23.918984       1 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
	I0701 19:40:23.918987       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0701 19:40:23.918994       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0701 19:40:24.067999       1 controllermanager.go:638] "Started controller" controller="persistentvolume-expander"
	I0701 19:40:24.068034       1 expand_controller.go:339] "Starting expand controller"
	I0701 19:40:24.068040       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0701 19:40:24.327596       1 controllermanager.go:638] "Started controller" controller="namespace"
	I0701 19:40:24.327674       1 namespace_controller.go:197] "Starting namespace controller"
	I0701 19:40:24.327694       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0701 19:40:24.567146       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0701 19:40:24.567158       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0701 19:40:24.567177       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0701 19:40:24.567193       1 controllermanager.go:638] "Started controller" controller="garbagecollector"
	I0701 19:40:24.817327       1 controllermanager.go:638] "Started controller" controller="replicaset"
	I0701 19:40:24.817368       1 replica_set.go:201] "Starting controller" name="replicaset"
	I0701 19:40:24.817377       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0701 19:40:24.967763       1 controllermanager.go:638] "Started controller" controller="bootstrapsigner"
	I0701 19:40:24.967797       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0701 19:40:25.117185       1 controllermanager.go:638] "Started controller" controller="endpoint"
	I0701 19:40:25.117268       1 endpoints_controller.go:172] Starting endpoint controller
	I0701 19:40:25.117279       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	
	* 
	* ==> kube-scheduler [1fcad965dd56] <==
	* W0701 19:40:19.900882       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 19:40:19.900885       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 19:40:19.900901       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 19:40:19.900908       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0701 19:40:19.900921       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 19:40:19.900924       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 19:40:19.900935       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 19:40:19.900938       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 19:40:19.900952       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0701 19:40:19.900956       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0701 19:40:19.900967       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0701 19:40:19.900970       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0701 19:40:19.901005       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0701 19:40:19.901012       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0701 19:40:20.738403       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 19:40:20.738482       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0701 19:40:20.743120       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0701 19:40:20.743149       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0701 19:40:20.834186       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 19:40:20.836009       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 19:40:20.889409       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0701 19:40:20.889470       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0701 19:40:20.924116       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0701 19:40:20.924165       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0701 19:40:21.097621       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-07-01 19:40:06 UTC, ends at Sat 2023-07-01 19:40:25 UTC. --
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.752908    2329 topology_manager.go:212] "Topology Admit Handler"
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.752930    2329 topology_manager.go:212] "Topology Admit Handler"
	Jul 01 19:40:22 image-933000 kubelet[2329]: E0701 19:40:22.756497    2329 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"etcd-image-933000\" already exists" pod="kube-system/etcd-image-933000"
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.756537    2329 kubelet_node_status.go:108] "Node was previously registered" node="image-933000"
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.756635    2329 kubelet_node_status.go:73] "Successfully registered node" node="image-933000"
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.947435    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2a4aa82744138b57643f0503b504261e-flexvolume-dir\") pod \"kube-controller-manager-image-933000\" (UID: \"2a4aa82744138b57643f0503b504261e\") " pod="kube-system/kube-controller-manager-image-933000"
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.947460    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a4aa82744138b57643f0503b504261e-k8s-certs\") pod \"kube-controller-manager-image-933000\" (UID: \"2a4aa82744138b57643f0503b504261e\") " pod="kube-system/kube-controller-manager-image-933000"
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.947476    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a4aa82744138b57643f0503b504261e-usr-share-ca-certificates\") pod \"kube-controller-manager-image-933000\" (UID: \"2a4aa82744138b57643f0503b504261e\") " pod="kube-system/kube-controller-manager-image-933000"
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.947485    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/33e7de562fdfb71f0dcd17158dfd67c3-etcd-certs\") pod \"etcd-image-933000\" (UID: \"33e7de562fdfb71f0dcd17158dfd67c3\") " pod="kube-system/etcd-image-933000"
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.947494    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28ae7ec2b5d825ebbff2f3b15860174a-ca-certs\") pod \"kube-apiserver-image-933000\" (UID: \"28ae7ec2b5d825ebbff2f3b15860174a\") " pod="kube-system/kube-apiserver-image-933000"
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.947503    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28ae7ec2b5d825ebbff2f3b15860174a-k8s-certs\") pod \"kube-apiserver-image-933000\" (UID: \"28ae7ec2b5d825ebbff2f3b15860174a\") " pod="kube-system/kube-apiserver-image-933000"
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.947513    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28ae7ec2b5d825ebbff2f3b15860174a-usr-share-ca-certificates\") pod \"kube-apiserver-image-933000\" (UID: \"28ae7ec2b5d825ebbff2f3b15860174a\") " pod="kube-system/kube-apiserver-image-933000"
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.947521    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a4aa82744138b57643f0503b504261e-ca-certs\") pod \"kube-controller-manager-image-933000\" (UID: \"2a4aa82744138b57643f0503b504261e\") " pod="kube-system/kube-controller-manager-image-933000"
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.947531    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a4aa82744138b57643f0503b504261e-kubeconfig\") pod \"kube-controller-manager-image-933000\" (UID: \"2a4aa82744138b57643f0503b504261e\") " pod="kube-system/kube-controller-manager-image-933000"
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.947542    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6371b604d1ebb0c1e0061efd2f24208a-kubeconfig\") pod \"kube-scheduler-image-933000\" (UID: \"6371b604d1ebb0c1e0061efd2f24208a\") " pod="kube-system/kube-scheduler-image-933000"
	Jul 01 19:40:22 image-933000 kubelet[2329]: I0701 19:40:22.947551    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/33e7de562fdfb71f0dcd17158dfd67c3-etcd-data\") pod \"etcd-image-933000\" (UID: \"33e7de562fdfb71f0dcd17158dfd67c3\") " pod="kube-system/etcd-image-933000"
	Jul 01 19:40:23 image-933000 kubelet[2329]: I0701 19:40:23.633472    2329 apiserver.go:52] "Watching apiserver"
	Jul 01 19:40:23 image-933000 kubelet[2329]: I0701 19:40:23.646047    2329 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jul 01 19:40:23 image-933000 kubelet[2329]: I0701 19:40:23.652781    2329 reconciler.go:41] "Reconciler: start to sync state"
	Jul 01 19:40:23 image-933000 kubelet[2329]: E0701 19:40:23.699877    2329 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-933000\" already exists" pod="kube-system/kube-apiserver-image-933000"
	Jul 01 19:40:23 image-933000 kubelet[2329]: E0701 19:40:23.700176    2329 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-image-933000\" already exists" pod="kube-system/kube-scheduler-image-933000"
	Jul 01 19:40:23 image-933000 kubelet[2329]: I0701 19:40:23.710670    2329 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-933000" podStartSLOduration=2.710647175 podCreationTimestamp="2023-07-01 19:40:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-01 19:40:23.706807217 +0000 UTC m=+1.115734126" watchObservedRunningTime="2023-07-01 19:40:23.710647175 +0000 UTC m=+1.119574085"
	Jul 01 19:40:23 image-933000 kubelet[2329]: I0701 19:40:23.716547    2329 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-933000" podStartSLOduration=1.716528008 podCreationTimestamp="2023-07-01 19:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-01 19:40:23.7107498 +0000 UTC m=+1.119676668" watchObservedRunningTime="2023-07-01 19:40:23.716528008 +0000 UTC m=+1.125454918"
	Jul 01 19:40:23 image-933000 kubelet[2329]: I0701 19:40:23.716584    2329 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-933000" podStartSLOduration=1.716566883 podCreationTimestamp="2023-07-01 19:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-01 19:40:23.716419675 +0000 UTC m=+1.125346585" watchObservedRunningTime="2023-07-01 19:40:23.716566883 +0000 UTC m=+1.125493793"
	Jul 01 19:40:23 image-933000 kubelet[2329]: I0701 19:40:23.723167    2329 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-933000" podStartSLOduration=1.72314955 podCreationTimestamp="2023-07-01 19:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-01 19:40:23.719682758 +0000 UTC m=+1.128609668" watchObservedRunningTime="2023-07-01 19:40:23.72314955 +0000 UTC m=+1.132076460"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-933000 -n image-933000
helpers_test.go:261: (dbg) Run:  kubectl --context image-933000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-933000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-933000 describe pod storage-provisioner: exit status 1 (37.129709ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-933000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.01s)
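
For local triage (a sketch, not part of the captured run): the storage-provisioner pod lives in the kube-system namespace, so the namespace-less "kubectl describe pod storage-provisioner" above reports NotFound even when the pod exists but is merely not Running. The same post-mortem checks with the namespace made explicit, reusing the image-933000 profile from the log:

	kubectl --context image-933000 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	kubectl --context image-933000 -n kube-system describe pod storage-provisioner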

TestIngressAddonLegacy/serial/ValidateIngressAddons (55.43s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-673000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-673000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.894111042s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-673000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-673000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [accf66f2-ad8c-4b14-a2c2-73af7c8465ab] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [accf66f2-ad8c-4b14-a2c2-73af7c8465ab] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.014825334s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-673000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-673000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-673000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.041148917s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached


stderr: 
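
For local triage (a sketch, not part of the captured run): "no servers could be reached" means the query to 192.168.105.6 was never answered at all, as opposed to ingress-dns answering with NXDOMAIN, and the ~15 s duration is consistent with nslookup exhausting its default retries without a reply. Assuming the profile is still up, the lookup can be retried with tighter timeouts, and a direct dig shows the failure mode explicitly:

	IP=$(out/minikube-darwin-arm64 -p ingress-addon-legacy-673000 ip)
	nslookup -timeout=2 hello-john.test "$IP"
	dig +time=2 +tries=1 hello-john.test @"$IP"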
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-673000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-673000 addons disable ingress-dns --alsologtostderr -v=1: (4.180993917s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-673000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-673000 addons disable ingress --alsologtostderr -v=1: (7.067319416s)
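
A reproduction note (not part of the captured run): the disable steps above delete the addon pods, so their logs are gone by the time the post-mortem below runs. When retrying locally, capturing them first preserves the DNS-side evidence; the controller selector is the one this test waits on, while the app=minikube-ingress-dns label is an assumption about the ingress-dns addon manifest:

	kubectl --context ingress-addon-legacy-673000 -n ingress-nginx logs \
	  -l app.kubernetes.io/component=controller --tail=100
	kubectl --context ingress-addon-legacy-673000 -n kube-system logs \
	  -l app=minikube-ingress-dns --tail=100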
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-673000 -n ingress-addon-legacy-673000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-673000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                           Args                           |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-011000 image ls                               | functional-011000           | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	| image   | functional-011000 image load                             | functional-011000           | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | /Users/jenkins/workspace/addon-resizer-save.tar          |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-011000 image ls                               | functional-011000           | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	| image   | functional-011000 image save --daemon                    | functional-011000           | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | gcr.io/google-containers/addon-resizer:functional-011000 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-011000                                        | functional-011000           | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | image ls --format short                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-011000                                        | functional-011000           | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | image ls --format yaml                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| ssh     | functional-011000 ssh pgrep                              | functional-011000           | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT |                     |
	|         | buildkitd                                                |                             |         |         |                     |                     |
	| image   | functional-011000                                        | functional-011000           | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | image ls --format json                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-011000 image build -t                         | functional-011000           | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | localhost/my-image:functional-011000                     |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                         |                             |         |         |                     |                     |
	| image   | functional-011000                                        | functional-011000           | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	|         | image ls --format table                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-011000 image ls                               | functional-011000           | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	| delete  | -p functional-011000                                     | functional-011000           | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:39 PDT |
	| start   | -p image-933000 --driver=qemu2                           | image-933000                | jenkins | v1.30.1 | 01 Jul 23 12:39 PDT | 01 Jul 23 12:40 PDT |
	|         |                                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-933000                | jenkins | v1.30.1 | 01 Jul 23 12:40 PDT | 01 Jul 23 12:40 PDT |
	|         | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|         | -p image-933000                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-933000                | jenkins | v1.30.1 | 01 Jul 23 12:40 PDT | 01 Jul 23 12:40 PDT |
	|         | --build-opt=build-arg=ENV_A=test_env_str                 |                             |         |         |                     |                     |
	|         | --build-opt=no-cache                                     |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p                       |                             |         |         |                     |                     |
	|         | image-933000                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-933000                | jenkins | v1.30.1 | 01 Jul 23 12:40 PDT | 01 Jul 23 12:40 PDT |
	|         | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|         | --build-opt=no-cache -p                                  |                             |         |         |                     |                     |
	|         | image-933000                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-933000                | jenkins | v1.30.1 | 01 Jul 23 12:40 PDT | 01 Jul 23 12:40 PDT |
	|         | -f inner/Dockerfile                                      |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-f                            |                             |         |         |                     |                     |
	|         | -p image-933000                                          |                             |         |         |                     |                     |
	| delete  | -p image-933000                                          | image-933000                | jenkins | v1.30.1 | 01 Jul 23 12:40 PDT | 01 Jul 23 12:40 PDT |
	| start   | -p ingress-addon-legacy-673000                           | ingress-addon-legacy-673000 | jenkins | v1.30.1 | 01 Jul 23 12:40 PDT | 01 Jul 23 12:42 PDT |
	|         | --kubernetes-version=v1.18.20                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	|         | --driver=qemu2                                           |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-673000                              | ingress-addon-legacy-673000 | jenkins | v1.30.1 | 01 Jul 23 12:42 PDT | 01 Jul 23 12:42 PDT |
	|         | addons enable ingress                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-673000                              | ingress-addon-legacy-673000 | jenkins | v1.30.1 | 01 Jul 23 12:42 PDT | 01 Jul 23 12:42 PDT |
	|         | addons enable ingress-dns                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-673000                              | ingress-addon-legacy-673000 | jenkins | v1.30.1 | 01 Jul 23 12:42 PDT | 01 Jul 23 12:42 PDT |
	|         | ssh curl -s http://127.0.0.1/                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-673000 ip                           | ingress-addon-legacy-673000 | jenkins | v1.30.1 | 01 Jul 23 12:42 PDT | 01 Jul 23 12:42 PDT |
	| addons  | ingress-addon-legacy-673000                              | ingress-addon-legacy-673000 | jenkins | v1.30.1 | 01 Jul 23 12:43 PDT | 01 Jul 23 12:43 PDT |
	|         | addons disable ingress-dns                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-673000                              | ingress-addon-legacy-673000 | jenkins | v1.30.1 | 01 Jul 23 12:43 PDT | 01 Jul 23 12:43 PDT |
	|         | addons disable ingress                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	|---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/01 12:40:26
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:40:26.128172    2374 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:40:26.128295    2374 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:40:26.128298    2374 out.go:309] Setting ErrFile to fd 2...
	I0701 12:40:26.128300    2374 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:40:26.128376    2374 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:40:26.129417    2374 out.go:303] Setting JSON to false
	I0701 12:40:26.144675    2374 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":596,"bootTime":1688239830,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:40:26.144731    2374 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:40:26.148715    2374 out.go:177] * [ingress-addon-legacy-673000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:40:26.155681    2374 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:40:26.155702    2374 notify.go:220] Checking for updates...
	I0701 12:40:26.159552    2374 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:40:26.162636    2374 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:40:26.165646    2374 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:40:26.168594    2374 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:40:26.171662    2374 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:40:26.174819    2374 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:40:26.177660    2374 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:40:26.184621    2374 start.go:297] selected driver: qemu2
	I0701 12:40:26.184627    2374 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:40:26.184633    2374 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:40:26.186513    2374 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:40:26.187802    2374 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:40:26.190751    2374 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:40:26.190778    2374 cni.go:84] Creating CNI manager for ""
	I0701 12:40:26.190788    2374 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0701 12:40:26.190794    2374 start_flags.go:319] config:
	{Name:ingress-addon-legacy-673000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-673000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:40:26.194840    2374 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:40:26.201569    2374 out.go:177] * Starting control plane node ingress-addon-legacy-673000 in cluster ingress-addon-legacy-673000
	I0701 12:40:26.208722    2374 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0701 12:40:26.266738    2374 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0701 12:40:26.266753    2374 cache.go:57] Caching tarball of preloaded images
	I0701 12:40:26.266995    2374 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0701 12:40:26.275567    2374 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0701 12:40:26.283605    2374 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0701 12:40:26.362575    2374 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0701 12:40:31.424999    2374 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0701 12:40:31.425146    2374 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0701 12:40:32.172976    2374 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
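The `?checksum=md5:...` query string in the download URL above is consumed by minikube's downloader, which verifies the tarball before caching it. A rough manual equivalent, assuming `curl` and the BSD `md5` tool that ships with macOS (`md5sum` on Linux); the local filename `preloaded.tar.lz4` is a placeholder:

    curl -fLo preloaded.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4"
    md5 -q preloaded.tar.lz4    # should print c8c260b886393123ce9d312d8ac2379e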
	I0701 12:40:32.173160    2374 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/config.json ...
	I0701 12:40:32.173181    2374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/config.json: {Name:mk1ad79e05771614c8449993c11ee95c7f0d8fab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:32.173416    2374 start.go:365] acquiring machines lock for ingress-addon-legacy-673000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:40:32.173443    2374 start.go:369] acquired machines lock for "ingress-addon-legacy-673000" in 21.292µs
	I0701 12:40:32.173454    2374 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-673000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 K
ubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-673000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:40:32.173491    2374 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:40:32.178516    2374 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0701 12:40:32.193051    2374 start.go:159] libmachine.API.Create for "ingress-addon-legacy-673000" (driver="qemu2")
	I0701 12:40:32.193070    2374 client.go:168] LocalClient.Create starting
	I0701 12:40:32.193141    2374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:40:32.193165    2374 main.go:141] libmachine: Decoding PEM data...
	I0701 12:40:32.193175    2374 main.go:141] libmachine: Parsing certificate...
	I0701 12:40:32.193223    2374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:40:32.193239    2374 main.go:141] libmachine: Decoding PEM data...
	I0701 12:40:32.193248    2374 main.go:141] libmachine: Parsing certificate...
	I0701 12:40:32.193583    2374 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:40:32.500563    2374 main.go:141] libmachine: Creating SSH key...
	I0701 12:40:32.602777    2374 main.go:141] libmachine: Creating Disk image...
	I0701 12:40:32.602782    2374 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:40:32.602910    2374 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/disk.qcow2
	I0701 12:40:32.611502    2374 main.go:141] libmachine: STDOUT: 
	I0701 12:40:32.611529    2374 main.go:141] libmachine: STDERR: 
	I0701 12:40:32.611597    2374 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/disk.qcow2 +20000M
	I0701 12:40:32.618685    2374 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:40:32.618697    2374 main.go:141] libmachine: STDERR: 
	I0701 12:40:32.618716    2374 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/disk.qcow2
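The two qemu-img invocations above first convert the raw seed image to qcow2 and then grow its virtual size by 20000 MB; the resize only rewrites metadata, which is why it completes in milliseconds. A standalone sketch of the same sequence, assuming qemu-img is on PATH and `disk.raw` is a placeholder seed image:

    qemu-img convert -f raw -O qcow2 disk.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M
    qemu-img info disk.qcow2    # optional: confirm the new virtual size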
	I0701 12:40:32.618724    2374 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:40:32.618758    2374 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:f2:35:1a:de:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/disk.qcow2
	I0701 12:40:32.652876    2374 main.go:141] libmachine: STDOUT: 
	I0701 12:40:32.652899    2374 main.go:141] libmachine: STDERR: 
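Note the `-netdev socket,id=net0,fd=3` in the launch command: QEMU does not open the network itself but inherits an already-connected datagram socket on file descriptor 3 from socket_vmnet_client, which is why the whole invocation is wrapped by /opt/socket_vmnet/bin/socket_vmnet_client. A minimal sketch of the same wrapping for an arbitrary guest (`guest.qcow2` is a placeholder image):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt -cpu host -accel hvf -m 1024 \
      -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
      -drive file=guest.qcow2,if=virtio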
	I0701 12:40:32.652903    2374 main.go:141] libmachine: Attempt 0
	I0701 12:40:32.652913    2374 main.go:141] libmachine: Searching for 86:f2:35:1a:de:c7 in /var/db/dhcpd_leases ...
	I0701 12:40:32.652981    2374 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0701 12:40:32.653005    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:c2:16:f5:2a:29:a6 ID:1,c2:16:f5:2a:29:a6 Lease:0x64a1d296}
	I0701 12:40:32.653024    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:4a:25:c7:e6:92:de ID:1,4a:25:c7:e6:92:de Lease:0x64a1d1cd}
	I0701 12:40:32.653029    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5a:80:fc:ef:a2:6 ID:1,5a:80:fc:ef:a2:6 Lease:0x64a08041}
	I0701 12:40:32.653035    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:9f:7c:22:df:64 ID:1,a6:9f:7c:22:df:64 Lease:0x64a1d175}
	I0701 12:40:34.655144    2374 main.go:141] libmachine: Attempt 1
	I0701 12:40:34.655223    2374 main.go:141] libmachine: Searching for 86:f2:35:1a:de:c7 in /var/db/dhcpd_leases ...
	I0701 12:40:34.655601    2374 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0701 12:40:34.655652    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:c2:16:f5:2a:29:a6 ID:1,c2:16:f5:2a:29:a6 Lease:0x64a1d296}
	I0701 12:40:34.655730    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:4a:25:c7:e6:92:de ID:1,4a:25:c7:e6:92:de Lease:0x64a1d1cd}
	I0701 12:40:34.655763    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5a:80:fc:ef:a2:6 ID:1,5a:80:fc:ef:a2:6 Lease:0x64a08041}
	I0701 12:40:34.655797    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:9f:7c:22:df:64 ID:1,a6:9f:7c:22:df:64 Lease:0x64a1d175}
	I0701 12:40:36.658011    2374 main.go:141] libmachine: Attempt 2
	I0701 12:40:36.658091    2374 main.go:141] libmachine: Searching for 86:f2:35:1a:de:c7 in /var/db/dhcpd_leases ...
	I0701 12:40:36.658229    2374 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0701 12:40:36.658240    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:c2:16:f5:2a:29:a6 ID:1,c2:16:f5:2a:29:a6 Lease:0x64a1d296}
	I0701 12:40:36.658248    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:4a:25:c7:e6:92:de ID:1,4a:25:c7:e6:92:de Lease:0x64a1d1cd}
	I0701 12:40:36.658253    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5a:80:fc:ef:a2:6 ID:1,5a:80:fc:ef:a2:6 Lease:0x64a08041}
	I0701 12:40:36.658260    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:9f:7c:22:df:64 ID:1,a6:9f:7c:22:df:64 Lease:0x64a1d175}
	I0701 12:40:38.660285    2374 main.go:141] libmachine: Attempt 3
	I0701 12:40:38.660313    2374 main.go:141] libmachine: Searching for 86:f2:35:1a:de:c7 in /var/db/dhcpd_leases ...
	I0701 12:40:38.660366    2374 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0701 12:40:38.660377    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:c2:16:f5:2a:29:a6 ID:1,c2:16:f5:2a:29:a6 Lease:0x64a1d296}
	I0701 12:40:38.660384    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:4a:25:c7:e6:92:de ID:1,4a:25:c7:e6:92:de Lease:0x64a1d1cd}
	I0701 12:40:38.660388    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5a:80:fc:ef:a2:6 ID:1,5a:80:fc:ef:a2:6 Lease:0x64a08041}
	I0701 12:40:38.660394    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:9f:7c:22:df:64 ID:1,a6:9f:7c:22:df:64 Lease:0x64a1d175}
	I0701 12:40:40.662395    2374 main.go:141] libmachine: Attempt 4
	I0701 12:40:40.662408    2374 main.go:141] libmachine: Searching for 86:f2:35:1a:de:c7 in /var/db/dhcpd_leases ...
	I0701 12:40:40.662436    2374 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0701 12:40:40.662443    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:c2:16:f5:2a:29:a6 ID:1,c2:16:f5:2a:29:a6 Lease:0x64a1d296}
	I0701 12:40:40.662449    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:4a:25:c7:e6:92:de ID:1,4a:25:c7:e6:92:de Lease:0x64a1d1cd}
	I0701 12:40:40.662454    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5a:80:fc:ef:a2:6 ID:1,5a:80:fc:ef:a2:6 Lease:0x64a08041}
	I0701 12:40:40.662459    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:9f:7c:22:df:64 ID:1,a6:9f:7c:22:df:64 Lease:0x64a1d175}
	I0701 12:40:42.664488    2374 main.go:141] libmachine: Attempt 5
	I0701 12:40:42.664509    2374 main.go:141] libmachine: Searching for 86:f2:35:1a:de:c7 in /var/db/dhcpd_leases ...
	I0701 12:40:42.664592    2374 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0701 12:40:42.664601    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:c2:16:f5:2a:29:a6 ID:1,c2:16:f5:2a:29:a6 Lease:0x64a1d296}
	I0701 12:40:42.664606    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:4a:25:c7:e6:92:de ID:1,4a:25:c7:e6:92:de Lease:0x64a1d1cd}
	I0701 12:40:42.664612    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5a:80:fc:ef:a2:6 ID:1,5a:80:fc:ef:a2:6 Lease:0x64a08041}
	I0701 12:40:42.664617    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:9f:7c:22:df:64 ID:1,a6:9f:7c:22:df:64 Lease:0x64a1d175}
	I0701 12:40:44.666673    2374 main.go:141] libmachine: Attempt 6
	I0701 12:40:44.666725    2374 main.go:141] libmachine: Searching for 86:f2:35:1a:de:c7 in /var/db/dhcpd_leases ...
	I0701 12:40:44.666875    2374 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0701 12:40:44.666898    2374 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:86:f2:35:1a:de:c7 ID:1,86:f2:35:1a:de:c7 Lease:0x64a1d2bb}
	I0701 12:40:44.666908    2374 main.go:141] libmachine: Found match: 86:f2:35:1a:de:c7
	I0701 12:40:44.666926    2374 main.go:141] libmachine: IP: 192.168.105.6
	I0701 12:40:44.666937    2374 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
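The IP-discovery loop above does nothing more than re-read macOS's DHCP lease database every two seconds until the VM's MAC address appears. A hedged one-liner that performs the same lookup (field order in the lease file can vary between macOS releases):

    grep -B 2 'hw_address=1,86:f2:35:1a:de:c7' /var/db/dhcpd_leases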
	I0701 12:40:46.688508    2374 machine.go:88] provisioning docker machine ...
	I0701 12:40:46.688590    2374 buildroot.go:166] provisioning hostname "ingress-addon-legacy-673000"
	I0701 12:40:46.688925    2374 main.go:141] libmachine: Using SSH client type: native
	I0701 12:40:46.689932    2374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b71100] 0x102b73b60 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0701 12:40:46.689959    2374 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-673000 && echo "ingress-addon-legacy-673000" | sudo tee /etc/hostname
	I0701 12:40:46.788185    2374 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-673000
	
	I0701 12:40:46.788327    2374 main.go:141] libmachine: Using SSH client type: native
	I0701 12:40:46.788833    2374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b71100] 0x102b73b60 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0701 12:40:46.788853    2374 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-673000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-673000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-673000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 12:40:46.868079    2374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 12:40:46.868108    2374 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15452-1041/.minikube CaCertPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15452-1041/.minikube}
	I0701 12:40:46.868129    2374 buildroot.go:174] setting up certificates
	I0701 12:40:46.868142    2374 provision.go:83] configureAuth start
	I0701 12:40:46.868150    2374 provision.go:138] copyHostCerts
	I0701 12:40:46.868213    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.pem
	I0701 12:40:46.868309    2374 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.pem, removing ...
	I0701 12:40:46.868320    2374 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.pem
	I0701 12:40:46.868527    2374 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.pem (1078 bytes)
	I0701 12:40:46.868795    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cert.pem
	I0701 12:40:46.868845    2374 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1041/.minikube/cert.pem, removing ...
	I0701 12:40:46.868851    2374 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1041/.minikube/cert.pem
	I0701 12:40:46.868961    2374 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15452-1041/.minikube/cert.pem (1123 bytes)
	I0701 12:40:46.869095    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15452-1041/.minikube/key.pem
	I0701 12:40:46.869137    2374 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1041/.minikube/key.pem, removing ...
	I0701 12:40:46.869142    2374 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1041/.minikube/key.pem
	I0701 12:40:46.869219    2374 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15452-1041/.minikube/key.pem (1679 bytes)
	I0701 12:40:46.869365    2374 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-673000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-673000]
	I0701 12:40:46.991798    2374 provision.go:172] copyRemoteCerts
	I0701 12:40:46.991884    2374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 12:40:46.991896    2374 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/id_rsa Username:docker}
	I0701 12:40:47.028968    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 12:40:47.029022    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0701 12:40:47.036641    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 12:40:47.036684    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0701 12:40:47.043698    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 12:40:47.043747    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 12:40:47.050473    2374 provision.go:86] duration metric: configureAuth took 182.328333ms
	I0701 12:40:47.050481    2374 buildroot.go:189] setting minikube options for container-runtime
	I0701 12:40:47.050578    2374 config.go:182] Loaded profile config "ingress-addon-legacy-673000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0701 12:40:47.050612    2374 main.go:141] libmachine: Using SSH client type: native
	I0701 12:40:47.050832    2374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b71100] 0x102b73b60 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0701 12:40:47.050839    2374 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0701 12:40:47.114879    2374 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0701 12:40:47.114887    2374 buildroot.go:70] root file system type: tmpfs
	I0701 12:40:47.114950    2374 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0701 12:40:47.115001    2374 main.go:141] libmachine: Using SSH client type: native
	I0701 12:40:47.115252    2374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b71100] 0x102b73b60 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0701 12:40:47.115295    2374 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0701 12:40:47.183876    2374 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0701 12:40:47.183930    2374 main.go:141] libmachine: Using SSH client type: native
	I0701 12:40:47.184214    2374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b71100] 0x102b73b60 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0701 12:40:47.184225    2374 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0701 12:40:47.543386    2374 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0701 12:40:47.543398    2374 machine.go:91] provisioned docker machine in 854.876167ms
	I0701 12:40:47.543404    2374 client.go:171] LocalClient.Create took 15.350620584s
	I0701 12:40:47.543420    2374 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-673000" took 15.350661417s
	I0701 12:40:47.543426    2374 start.go:300] post-start starting for "ingress-addon-legacy-673000" (driver="qemu2")
	I0701 12:40:47.543434    2374 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 12:40:47.543500    2374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 12:40:47.543511    2374 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/id_rsa Username:docker}
	I0701 12:40:47.578774    2374 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 12:40:47.580002    2374 info.go:137] Remote host: Buildroot 2021.02.12
	I0701 12:40:47.580010    2374 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15452-1041/.minikube/addons for local assets ...
	I0701 12:40:47.580081    2374 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15452-1041/.minikube/files for local assets ...
	I0701 12:40:47.580186    2374 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/14612.pem -> 14612.pem in /etc/ssl/certs
	I0701 12:40:47.580191    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/14612.pem -> /etc/ssl/certs/14612.pem
	I0701 12:40:47.580303    2374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 12:40:47.582803    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/14612.pem --> /etc/ssl/certs/14612.pem (1708 bytes)
	I0701 12:40:47.589829    2374 start.go:303] post-start completed in 46.398792ms
	I0701 12:40:47.590224    2374 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/config.json ...
	I0701 12:40:47.590388    2374 start.go:128] duration metric: createHost completed in 15.417184959s
	I0701 12:40:47.590414    2374 main.go:141] libmachine: Using SSH client type: native
	I0701 12:40:47.590637    2374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b71100] 0x102b73b60 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0701 12:40:47.590642    2374 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0701 12:40:47.655044    2374 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688240447.606906294
	
	I0701 12:40:47.655050    2374 fix.go:206] guest clock: 1688240447.606906294
	I0701 12:40:47.655054    2374 fix.go:219] Guest: 2023-07-01 12:40:47.606906294 -0700 PDT Remote: 2023-07-01 12:40:47.590391 -0700 PDT m=+21.482144835 (delta=16.515294ms)
	I0701 12:40:47.655065    2374 fix.go:190] guest clock delta is within tolerance: 16.515294ms
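The guest/host clock comparison above runs `date +%s.%N` inside the VM and diffs it against the host's wall clock. A manual spot check, assuming the SSH key provisioned earlier (BSD date on the macOS host does not support %N, so only whole seconds are compared on that side):

    ssh -i /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/id_rsa \
      docker@192.168.105.6 'date +%s.%N'; date +%s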
	I0701 12:40:47.655068    2374 start.go:83] releasing machines lock for "ingress-addon-legacy-673000", held for 15.481913542s
	I0701 12:40:47.655358    2374 ssh_runner.go:195] Run: cat /version.json
	I0701 12:40:47.655366    2374 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/id_rsa Username:docker}
	I0701 12:40:47.655376    2374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 12:40:47.655395    2374 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/id_rsa Username:docker}
	I0701 12:40:47.690877    2374 ssh_runner.go:195] Run: systemctl --version
	I0701 12:40:47.732701    2374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0701 12:40:47.734644    2374 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0701 12:40:47.734683    2374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0701 12:40:47.738026    2374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0701 12:40:47.743231    2374 cni.go:314] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0701 12:40:47.743239    2374 start.go:466] detecting cgroup driver to use...
	I0701 12:40:47.743308    2374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:40:47.750746    2374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0701 12:40:47.753926    2374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0701 12:40:47.757114    2374 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0701 12:40:47.757136    2374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0701 12:40:47.760455    2374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:40:47.763266    2374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0701 12:40:47.766039    2374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0701 12:40:47.769195    2374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 12:40:47.772286    2374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0701 12:40:47.775208    2374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 12:40:47.777917    2374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 12:40:47.780864    2374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:40:47.861706    2374 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0701 12:40:47.867964    2374 start.go:466] detecting cgroup driver to use...
	I0701 12:40:47.868014    2374 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0701 12:40:47.874387    2374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:40:47.878798    2374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 12:40:47.888759    2374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 12:40:47.893827    2374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:40:47.898917    2374 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0701 12:40:47.937418    2374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0701 12:40:47.942849    2374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 12:40:47.948376    2374 ssh_runner.go:195] Run: which cri-dockerd
	I0701 12:40:47.949619    2374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0701 12:40:47.952390    2374 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0701 12:40:47.957311    2374 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0701 12:40:48.038135    2374 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0701 12:40:48.125136    2374 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0701 12:40:48.125148    2374 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0701 12:40:48.130623    2374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:40:48.209528    2374 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:40:49.374936    2374 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.165415083s)
	I0701 12:40:49.375023    2374 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:40:49.384660    2374 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0701 12:40:49.403989    2374 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.2 ...
	I0701 12:40:49.404137    2374 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0701 12:40:49.405448    2374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:40:49.409399    2374 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0701 12:40:49.409441    2374 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:40:49.424715    2374 docker.go:636] Got preloaded images: 
	I0701 12:40:49.424724    2374 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0701 12:40:49.424772    2374 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0701 12:40:49.427704    2374 ssh_runner.go:195] Run: which lz4
	I0701 12:40:49.428823    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0701 12:40:49.428941    2374 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0701 12:40:49.430308    2374 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0701 12:40:49.430319    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0701 12:40:51.109177    2374 docker.go:600] Took 1.680335 seconds to copy over tarball
	I0701 12:40:51.109234    2374 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0701 12:40:52.403918    2374 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.294678959s)
	I0701 12:40:52.404015    2374 ssh_runner.go:146] rm: /preloaded.tar.lz4
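`tar -I lz4` hands decompression off to the lz4 binary and unpacks the preload straight into /var, which is what populates /var/lib/docker with the cached images. If the running tar lacks -I, an equivalent pipeline (assuming the lz4 CLI is installed) is:

    lz4 -dc /preloaded.tar.lz4 | sudo tar -C /var -x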
	I0701 12:40:52.426334    2374 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0701 12:40:52.429876    2374 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0701 12:40:52.435275    2374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 12:40:52.507495    2374 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0701 12:40:54.058890    2374 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.551408875s)
	I0701 12:40:54.058984    2374 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0701 12:40:54.064643    2374 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0701 12:40:54.064653    2374 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0701 12:40:54.064659    2374 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0701 12:40:54.077595    2374 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0701 12:40:54.077662    2374 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0701 12:40:54.077763    2374 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0701 12:40:54.077861    2374 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0701 12:40:54.077939    2374 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0701 12:40:54.078274    2374 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 12:40:54.078343    2374 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0701 12:40:54.079339    2374 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0701 12:40:54.086984    2374 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0701 12:40:54.087068    2374 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0701 12:40:54.087144    2374 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0701 12:40:54.087203    2374 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0701 12:40:54.087217    2374 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0701 12:40:54.087910    2374 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0701 12:40:54.088210    2374 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0701 12:40:54.088256    2374 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W0701 12:40:55.286103    2374 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0701 12:40:55.286210    2374 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0701 12:40:55.292419    2374 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0701 12:40:55.292446    2374 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0701 12:40:55.292493    2374 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0701 12:40:55.298324    2374 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0701 12:40:55.302257    2374 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0701 12:40:55.302354    2374 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0701 12:40:55.308548    2374 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0701 12:40:55.308569    2374 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0701 12:40:55.308609    2374 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0701 12:40:55.314780    2374 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0701 12:40:55.337811    2374 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0701 12:40:55.337948    2374 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0701 12:40:55.343873    2374 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0701 12:40:55.343896    2374 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0701 12:40:55.343940    2374 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0701 12:40:55.349820    2374 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W0701 12:40:55.471378    2374 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0701 12:40:55.471472    2374 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0701 12:40:55.477881    2374 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0701 12:40:55.477903    2374 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0701 12:40:55.477959    2374 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0701 12:40:55.483637    2374 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0701 12:40:55.562065    2374 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0701 12:40:55.562179    2374 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0701 12:40:55.567865    2374 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0701 12:40:55.567887    2374 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0701 12:40:55.567942    2374 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0701 12:40:55.583861    2374 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0701 12:40:55.742635    2374 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0701 12:40:55.742757    2374 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0701 12:40:55.748621    2374 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0701 12:40:55.748650    2374 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0701 12:40:55.748700    2374 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0701 12:40:55.753855    2374 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0701 12:40:55.932108    2374 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0701 12:40:55.944277    2374 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0701 12:40:55.944320    2374 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0701 12:40:55.944401    2374 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0701 12:40:55.953159    2374 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0701 12:40:56.586012    2374 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0701 12:40:56.586580    2374 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 12:40:56.610078    2374 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0701 12:40:56.610131    2374 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 12:40:56.610272    2374 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 12:40:56.634262    2374 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0701 12:40:56.634377    2374 cache_images.go:92] LoadImages completed in 2.56975925s
	W0701 12:40:56.634457    2374 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
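The warning above is the net effect of the arch-mismatch handling earlier in this block: each amd64 image that came out of the preload was removed, but no arm64 replacement existed in the local cache directory, so LoadImages gives up and the images will be pulled during kubeadm bootstrap instead. A quick way to see such a mismatch by hand, assuming docker with one of these images present:

    docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/kube-scheduler:v1.18.20
    # a preloaded image on this arm64 host would print linux/amd64 here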
	I0701 12:40:56.634552    2374 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0701 12:40:56.649506    2374 cni.go:84] Creating CNI manager for ""
	I0701 12:40:56.649524    2374 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0701 12:40:56.649535    2374 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0701 12:40:56.649550    2374 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-673000 NodeName:ingress-addon-legacy-673000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0701 12:40:56.649694    2374 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-673000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0701 12:40:56.649773    2374 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-673000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-673000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0701 12:40:56.649854    2374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0701 12:40:56.654584    2374 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 12:40:56.654627    2374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 12:40:56.658334    2374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0701 12:40:56.664866    2374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0701 12:40:56.671060    2374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0701 12:40:56.676758    2374 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0701 12:40:56.678094    2374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 12:40:56.681935    2374 certs.go:56] Setting up /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000 for IP: 192.168.105.6
	I0701 12:40:56.681944    2374 certs.go:190] acquiring lock for shared ca certs: {Name:mk0d2f6007eea276ce17a3a9c6aca904411113ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:56.682074    2374 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.key
	I0701 12:40:56.682114    2374 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.key
	I0701 12:40:56.682138    2374 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.key
	I0701 12:40:56.682146    2374 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt with IP's: []
	I0701 12:40:56.794113    2374 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt ...
	I0701 12:40:56.794118    2374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: {Name:mk903f17ce858944aaa9fafff8421843e1cfa6e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:56.794353    2374 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.key ...
	I0701 12:40:56.794356    2374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.key: {Name:mk64d95117d87586717c8d543bbf05e108d69922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:56.794482    2374 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/apiserver.key.b354f644
	I0701 12:40:56.794488    2374 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0701 12:40:56.843680    2374 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/apiserver.crt.b354f644 ...
	I0701 12:40:56.843684    2374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/apiserver.crt.b354f644: {Name:mk91bf92842b914da1cee1f05283d7c4bb41c96d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:56.843833    2374 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/apiserver.key.b354f644 ...
	I0701 12:40:56.843836    2374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/apiserver.key.b354f644: {Name:mk2c0ccddfd98138bcf94fa1f4a04526282c1c22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:56.843953    2374 certs.go:337] copying /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/apiserver.crt
	I0701 12:40:56.844057    2374 certs.go:341] copying /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/apiserver.key
	I0701 12:40:56.844137    2374 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/proxy-client.key
	I0701 12:40:56.844145    2374 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/proxy-client.crt with IP's: []
	I0701 12:40:57.083828    2374 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/proxy-client.crt ...
	I0701 12:40:57.083841    2374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/proxy-client.crt: {Name:mk59a3c0c8748d79055b11f3dea91fa3499b333b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:57.084157    2374 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/proxy-client.key ...
	I0701 12:40:57.084160    2374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/proxy-client.key: {Name:mkbd2fdbeea9e00f8dbfb025e626f0d0b8b0b0fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:40:57.084271    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 12:40:57.084293    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 12:40:57.084305    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 12:40:57.084318    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 12:40:57.084331    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 12:40:57.084344    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 12:40:57.084356    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 12:40:57.084368    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 12:40:57.084442    2374 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/1461.pem (1338 bytes)
	W0701 12:40:57.084479    2374 certs.go:433] ignoring /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/1461_empty.pem, impossibly tiny 0 bytes
	I0701 12:40:57.084487    2374 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca-key.pem (1679 bytes)
	I0701 12:40:57.084508    2374 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem (1078 bytes)
	I0701 12:40:57.084528    2374 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem (1123 bytes)
	I0701 12:40:57.084547    2374 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/certs/key.pem (1679 bytes)
	I0701 12:40:57.084591    2374 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/14612.pem (1708 bytes)
	I0701 12:40:57.084612    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/14612.pem -> /usr/share/ca-certificates/14612.pem
	I0701 12:40:57.084622    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:40:57.084632    2374 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/1461.pem -> /usr/share/ca-certificates/1461.pem
	I0701 12:40:57.084956    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0701 12:40:57.093371    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0701 12:40:57.100755    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 12:40:57.108302    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 12:40:57.115045    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 12:40:57.121742    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 12:40:57.129176    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 12:40:57.136620    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 12:40:57.143778    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/ssl/certs/14612.pem --> /usr/share/ca-certificates/14612.pem (1708 bytes)
	I0701 12:40:57.150386    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 12:40:57.157330    2374 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/1461.pem --> /usr/share/ca-certificates/1461.pem (1338 bytes)
	I0701 12:40:57.164373    2374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 12:40:57.169256    2374 ssh_runner.go:195] Run: openssl version
	I0701 12:40:57.171190    2374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 12:40:57.174044    2374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:40:57.175651    2374 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  1 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:40:57.175677    2374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 12:40:57.177333    2374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 12:40:57.180492    2374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1461.pem && ln -fs /usr/share/ca-certificates/1461.pem /etc/ssl/certs/1461.pem"
	I0701 12:40:57.183247    2374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1461.pem
	I0701 12:40:57.184617    2374 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul  1 19:36 /usr/share/ca-certificates/1461.pem
	I0701 12:40:57.184635    2374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1461.pem
	I0701 12:40:57.186413    2374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1461.pem /etc/ssl/certs/51391683.0"
	I0701 12:40:57.189528    2374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14612.pem && ln -fs /usr/share/ca-certificates/14612.pem /etc/ssl/certs/14612.pem"
	I0701 12:40:57.192848    2374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14612.pem
	I0701 12:40:57.194279    2374 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul  1 19:36 /usr/share/ca-certificates/14612.pem
	I0701 12:40:57.194302    2374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14612.pem
	I0701 12:40:57.196154    2374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14612.pem /etc/ssl/certs/3ec20f2e.0"
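Note: the openssl/ln pairs above implement OpenSSL's hashed certificate directory layout: each CA file gets a symlink named <subject-hash>.0 so certificate verification can locate it by hash. One round trip, reconstructed from the commands in this log (b5213941 is the hash of minikubeCA.pem seen above):

    # Compute the subject hash, then create the hash-named lookup symlink.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"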
	I0701 12:40:57.199055    2374 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0701 12:40:57.200324    2374 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0701 12:40:57.200356    2374 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-673000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-673000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:40:57.200422    2374 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0701 12:40:57.205880    2374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 12:40:57.209047    2374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 12:40:57.212041    2374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 12:40:57.214604    2374 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 12:40:57.214617    2374 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0701 12:40:57.239863    2374 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0701 12:40:57.239888    2374 kubeadm.go:322] [preflight] Running pre-flight checks
	I0701 12:40:57.327676    2374 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0701 12:40:57.327731    2374 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0701 12:40:57.327776    2374 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0701 12:40:57.373065    2374 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0701 12:40:57.374845    2374 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0701 12:40:57.374867    2374 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0701 12:40:57.455025    2374 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0701 12:40:57.465226    2374 out.go:204]   - Generating certificates and keys ...
	I0701 12:40:57.465284    2374 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0701 12:40:57.465316    2374 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0701 12:40:57.568079    2374 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0701 12:40:57.749664    2374 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0701 12:40:57.809047    2374 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0701 12:40:57.859518    2374 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0701 12:40:58.066171    2374 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0701 12:40:58.066274    2374 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-673000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0701 12:40:58.210722    2374 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0701 12:40:58.210868    2374 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-673000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0701 12:40:58.269797    2374 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0701 12:40:58.471889    2374 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0701 12:40:58.535673    2374 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0701 12:40:58.535784    2374 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0701 12:40:58.577760    2374 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0701 12:40:58.893122    2374 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0701 12:40:58.942816    2374 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0701 12:40:58.975354    2374 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0701 12:40:58.975784    2374 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0701 12:40:58.980049    2374 out.go:204]   - Booting up control plane ...
	I0701 12:40:58.980149    2374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0701 12:40:58.980195    2374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0701 12:40:58.980467    2374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0701 12:40:58.980510    2374 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0701 12:40:58.982150    2374 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0701 12:41:10.987177    2374 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.004483 seconds
	I0701 12:41:10.987421    2374 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0701 12:41:11.014053    2374 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0701 12:41:11.545189    2374 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0701 12:41:11.545395    2374 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-673000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0701 12:41:12.053393    2374 kubeadm.go:322] [bootstrap-token] Using token: 734uej.3stjl2libcgojwvl
	I0701 12:41:12.059589    2374 out.go:204]   - Configuring RBAC rules ...
	I0701 12:41:12.059687    2374 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0701 12:41:12.059788    2374 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0701 12:41:12.063840    2374 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0701 12:41:12.067869    2374 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0701 12:41:12.069975    2374 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0701 12:41:12.071484    2374 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0701 12:41:12.079858    2374 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0701 12:41:12.312755    2374 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0701 12:41:12.459134    2374 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0701 12:41:12.459700    2374 kubeadm.go:322] 
	I0701 12:41:12.459736    2374 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0701 12:41:12.459740    2374 kubeadm.go:322] 
	I0701 12:41:12.459780    2374 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0701 12:41:12.459792    2374 kubeadm.go:322] 
	I0701 12:41:12.459811    2374 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0701 12:41:12.459866    2374 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0701 12:41:12.459897    2374 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0701 12:41:12.459902    2374 kubeadm.go:322] 
	I0701 12:41:12.459925    2374 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0701 12:41:12.459976    2374 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0701 12:41:12.460023    2374 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0701 12:41:12.460027    2374 kubeadm.go:322] 
	I0701 12:41:12.460074    2374 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0701 12:41:12.460115    2374 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0701 12:41:12.460120    2374 kubeadm.go:322] 
	I0701 12:41:12.460162    2374 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 734uej.3stjl2libcgojwvl \
	I0701 12:41:12.460223    2374 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:46e6b689074307837292321246b5000df1ddfdde72c2b1da038f680c54d9d678 \
	I0701 12:41:12.460240    2374 kubeadm.go:322]     --control-plane 
	I0701 12:41:12.460245    2374 kubeadm.go:322] 
	I0701 12:41:12.460292    2374 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0701 12:41:12.460296    2374 kubeadm.go:322] 
	I0701 12:41:12.460343    2374 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 734uej.3stjl2libcgojwvl \
	I0701 12:41:12.460406    2374 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:46e6b689074307837292321246b5000df1ddfdde72c2b1da038f680c54d9d678 
	I0701 12:41:12.460630    2374 kubeadm.go:322] W0701 19:40:57.191848    1406 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0701 12:41:12.460741    2374 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0701 12:41:12.460809    2374 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
	I0701 12:41:12.460876    2374 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0701 12:41:12.460955    2374 kubeadm.go:322] W0701 19:40:58.931405    1406 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0701 12:41:12.461024    2374 kubeadm.go:322] W0701 19:40:58.932140    1406 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0701 12:41:12.461030    2374 cni.go:84] Creating CNI manager for ""
	I0701 12:41:12.461038    2374 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0701 12:41:12.461049    2374 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 12:41:12.461126    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:12.461127    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=2455319192314a5b3ac0f7b56253e90d3c5c74c2 minikube.k8s.io/name=ingress-addon-legacy-673000 minikube.k8s.io/updated_at=2023_07_01T12_41_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:12.539371    2374 ops.go:34] apiserver oom_adj: -16
	I0701 12:41:12.539435    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:13.075560    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:13.575806    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:14.075723    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:14.575893    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:15.075812    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:15.575723    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:16.075770    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:16.575734    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:17.074535    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:17.575717    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:18.075674    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:18.575667    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:19.075606    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:19.575764    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:20.075624    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:20.575655    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:21.075672    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:21.575679    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:22.075651    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:22.575574    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:23.075675    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:23.575604    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:24.075614    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:24.575555    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:25.075518    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:25.573371    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:26.075578    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:26.575320    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:27.075568    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:27.575289    2374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 12:41:27.630268    2374 kubeadm.go:1081] duration metric: took 15.169492875s to wait for elevateKubeSystemPrivileges.
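Note: the burst of identical "kubectl get sa default" calls above is minikube polling, at roughly 500ms intervals, for the default service account to exist before it grants kube-system privileges. A rough shell equivalent of that wait loop (a sketch, not minikube's actual implementation):

    # Poll every 500ms until the default ServiceAccount is visible.
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done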
	I0701 12:41:27.630285    2374 kubeadm.go:406] StartCluster complete in 30.430503458s
	I0701 12:41:27.630294    2374 settings.go:142] acquiring lock: {Name:mk1853b69cc489034eba1c68e94bf3f8bc0ceb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:41:27.630380    2374 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:41:27.630793    2374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/kubeconfig: {Name:mk6d6ec6f258eefdfd78eed77d0a2eac619f380e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:41:27.630996    2374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 12:41:27.631013    2374 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0701 12:41:27.631061    2374 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-673000"
	I0701 12:41:27.631071    2374 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-673000"
	I0701 12:41:27.631077    2374 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-673000"
	I0701 12:41:27.631086    2374 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-673000"
	I0701 12:41:27.631102    2374 host.go:66] Checking if "ingress-addon-legacy-673000" exists ...
	I0701 12:41:27.631495    2374 kapi.go:59] client config for ingress-addon-legacy-673000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.key", CAFile:"/Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103bcdc60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 12:41:27.631718    2374 config.go:182] Loaded profile config "ingress-addon-legacy-673000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	W0701 12:41:27.631895    2374 host.go:54] host status for "ingress-addon-legacy-673000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/monitor: connect: connection refused
	W0701 12:41:27.631907    2374 addons.go:274] "ingress-addon-legacy-673000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0701 12:41:27.631909    2374 cert_rotation.go:137] Starting client certificate rotation controller
	I0701 12:41:27.632557    2374 kapi.go:59] client config for ingress-addon-legacy-673000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.key", CAFile:"/Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103bcdc60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 12:41:27.637362    2374 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-673000"
	I0701 12:41:27.637377    2374 host.go:66] Checking if "ingress-addon-legacy-673000" exists ...
	I0701 12:41:27.638137    2374 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 12:41:27.638143    2374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 12:41:27.638153    2374 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/ingress-addon-legacy-673000/id_rsa Username:docker}
	I0701 12:41:27.675986    2374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 12:41:27.680941    2374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 12:41:27.808971    2374 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
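Note: the sed pipeline run at 12:41:27.675986 splices a hosts stanza (plus a log directive) into the CoreDNS Corefile before replacing the ConfigMap. Reconstructed from that sed expression, the injected fragment is:

    hosts {
       192.168.105.1 host.minikube.internal
       fallthrough
    }

This is what makes host.minikube.internal resolve to the host machine from inside the cluster.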
	I0701 12:41:27.851292    2374 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0701 12:41:27.861286    2374 addons.go:499] enable addons completed in 230.26825ms: enabled=[storage-provisioner default-storageclass]
	I0701 12:41:28.144118    2374 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-673000" context rescaled to 1 replicas
	I0701 12:41:28.144150    2374 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:41:28.151075    2374 out.go:177] * Verifying Kubernetes components...
	I0701 12:41:28.155025    2374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:41:28.193715    2374 kapi.go:59] client config for ingress-addon-legacy-673000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.key", CAFile:"/Users/jenkins/minikube-integration/15452-1041/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103bcdc60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 12:41:28.193882    2374 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-673000" to be "Ready" ...
	I0701 12:41:28.195986    2374 node_ready.go:49] node "ingress-addon-legacy-673000" has status "Ready":"True"
	I0701 12:41:28.195992    2374 node_ready.go:38] duration metric: took 2.09775ms waiting for node "ingress-addon-legacy-673000" to be "Ready" ...
	I0701 12:41:28.195996    2374 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:41:28.200248    2374 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace to be "Ready" ...
	I0701 12:41:30.217728    2374 pod_ready.go:102] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"False"
	I0701 12:41:32.717316    2374 pod_ready.go:102] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"False"
	I0701 12:41:35.213603    2374 pod_ready.go:102] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"False"
	I0701 12:41:37.217040    2374 pod_ready.go:102] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"False"
	I0701 12:41:39.217952    2374 pod_ready.go:102] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"False"
	I0701 12:41:41.716118    2374 pod_ready.go:102] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"False"
	I0701 12:41:43.719652    2374 pod_ready.go:102] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"False"
	I0701 12:41:46.217013    2374 pod_ready.go:102] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"False"
	I0701 12:41:48.218213    2374 pod_ready.go:102] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"False"
	I0701 12:41:50.218510    2374 pod_ready.go:102] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"False"
	I0701 12:41:52.717281    2374 pod_ready.go:102] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"False"
	I0701 12:41:54.718340    2374 pod_ready.go:102] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"False"
	I0701 12:41:57.217829    2374 pod_ready.go:102] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"False"
	I0701 12:41:59.717651    2374 pod_ready.go:102] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"False"
	I0701 12:42:02.209530    2374 pod_ready.go:92] pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace has status "Ready":"True"
	I0701 12:42:02.209547    2374 pod_ready.go:81] duration metric: took 34.009931875s waiting for pod "coredns-66bff467f8-7f2mv" in "kube-system" namespace to be "Ready" ...
	I0701 12:42:02.209556    2374 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-673000" in "kube-system" namespace to be "Ready" ...
	I0701 12:42:02.212605    2374 pod_ready.go:92] pod "etcd-ingress-addon-legacy-673000" in "kube-system" namespace has status "Ready":"True"
	I0701 12:42:02.212612    2374 pod_ready.go:81] duration metric: took 3.051833ms waiting for pod "etcd-ingress-addon-legacy-673000" in "kube-system" namespace to be "Ready" ...
	I0701 12:42:02.212618    2374 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-673000" in "kube-system" namespace to be "Ready" ...
	I0701 12:42:02.215729    2374 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-673000" in "kube-system" namespace has status "Ready":"True"
	I0701 12:42:02.215734    2374 pod_ready.go:81] duration metric: took 3.112375ms waiting for pod "kube-apiserver-ingress-addon-legacy-673000" in "kube-system" namespace to be "Ready" ...
	I0701 12:42:02.215739    2374 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-673000" in "kube-system" namespace to be "Ready" ...
	I0701 12:42:02.218353    2374 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-673000" in "kube-system" namespace has status "Ready":"True"
	I0701 12:42:02.218360    2374 pod_ready.go:81] duration metric: took 2.616875ms waiting for pod "kube-controller-manager-ingress-addon-legacy-673000" in "kube-system" namespace to be "Ready" ...
	I0701 12:42:02.218369    2374 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bm78v" in "kube-system" namespace to be "Ready" ...
	I0701 12:42:02.220799    2374 pod_ready.go:92] pod "kube-proxy-bm78v" in "kube-system" namespace has status "Ready":"True"
	I0701 12:42:02.220803    2374 pod_ready.go:81] duration metric: took 2.430333ms waiting for pod "kube-proxy-bm78v" in "kube-system" namespace to be "Ready" ...
	I0701 12:42:02.220807    2374 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-673000" in "kube-system" namespace to be "Ready" ...
	I0701 12:42:02.407176    2374 request.go:628] Waited for 186.310208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-673000
	I0701 12:42:02.607178    2374 request.go:628] Waited for 196.476291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-673000
	I0701 12:42:02.613675    2374 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-673000" in "kube-system" namespace has status "Ready":"True"
	I0701 12:42:02.613710    2374 pod_ready.go:81] duration metric: took 392.901541ms waiting for pod "kube-scheduler-ingress-addon-legacy-673000" in "kube-system" namespace to be "Ready" ...
	I0701 12:42:02.613735    2374 pod_ready.go:38] duration metric: took 34.418381083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 12:42:02.613794    2374 api_server.go:52] waiting for apiserver process to appear ...
	I0701 12:42:02.614141    2374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 12:42:02.630697    2374 api_server.go:72] duration metric: took 34.487173375s to wait for apiserver process to appear ...
	I0701 12:42:02.630725    2374 api_server.go:88] waiting for apiserver healthz status ...
	I0701 12:42:02.630752    2374 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0701 12:42:02.639648    2374 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0701 12:42:02.640780    2374 api_server.go:141] control plane version: v1.18.20
	I0701 12:42:02.640803    2374 api_server.go:131] duration metric: took 10.070958ms to wait for apiserver health ...
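Note: the healthz wait above is a plain HTTPS probe of the apiserver. An equivalent manual check (an assumption here: -k skips TLS verification, and /healthz is served to anonymous clients under this default v1.18 configuration):

    curl -k https://192.168.105.6:8443/healthz   # expect: ok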
	I0701 12:42:02.640811    2374 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 12:42:02.807218    2374 request.go:628] Waited for 166.290708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0701 12:42:02.819967    2374 system_pods.go:59] 6 kube-system pods found
	I0701 12:42:02.820014    2374 system_pods.go:61] "coredns-66bff467f8-7f2mv" [3116a0ff-bd9e-400a-9b45-48815f7e70e0] Running
	I0701 12:42:02.820026    2374 system_pods.go:61] "etcd-ingress-addon-legacy-673000" [cf4cea9f-2edf-47db-829e-f4ec3315ec3c] Running
	I0701 12:42:02.820038    2374 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-673000" [29dfaea0-4ba7-46e1-9533-edd4b35d5447] Running
	I0701 12:42:02.820050    2374 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-673000" [ae97c12a-9c82-461b-9f72-47d0c2075ac7] Running
	I0701 12:42:02.820058    2374 system_pods.go:61] "kube-proxy-bm78v" [7378b4d8-0f04-4596-8a71-b0b089e82284] Running
	I0701 12:42:02.820070    2374 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-673000" [9b155220-e8d5-44c1-82f1-b24747ca9e45] Running
	I0701 12:42:02.820083    2374 system_pods.go:74] duration metric: took 179.265875ms to wait for pod list to return data ...
	I0701 12:42:02.820105    2374 default_sa.go:34] waiting for default service account to be created ...
	I0701 12:42:03.007201    2374 request.go:628] Waited for 186.9435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0701 12:42:03.014755    2374 default_sa.go:45] found service account: "default"
	I0701 12:42:03.014793    2374 default_sa.go:55] duration metric: took 194.676958ms for default service account to be created ...
	I0701 12:42:03.014812    2374 system_pods.go:116] waiting for k8s-apps to be running ...
	I0701 12:42:03.205624    2374 request.go:628] Waited for 190.6815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0701 12:42:03.218158    2374 system_pods.go:86] 6 kube-system pods found
	I0701 12:42:03.218190    2374 system_pods.go:89] "coredns-66bff467f8-7f2mv" [3116a0ff-bd9e-400a-9b45-48815f7e70e0] Running
	I0701 12:42:03.218202    2374 system_pods.go:89] "etcd-ingress-addon-legacy-673000" [cf4cea9f-2edf-47db-829e-f4ec3315ec3c] Running
	I0701 12:42:03.218212    2374 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-673000" [29dfaea0-4ba7-46e1-9533-edd4b35d5447] Running
	I0701 12:42:03.218223    2374 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-673000" [ae97c12a-9c82-461b-9f72-47d0c2075ac7] Running
	I0701 12:42:03.218236    2374 system_pods.go:89] "kube-proxy-bm78v" [7378b4d8-0f04-4596-8a71-b0b089e82284] Running
	I0701 12:42:03.218248    2374 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-673000" [9b155220-e8d5-44c1-82f1-b24747ca9e45] Running
	I0701 12:42:03.218264    2374 system_pods.go:126] duration metric: took 203.439ms to wait for k8s-apps to be running ...
	I0701 12:42:03.218277    2374 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 12:42:03.218466    2374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 12:42:03.234382    2374 system_svc.go:56] duration metric: took 16.099333ms WaitForService to wait for kubelet.
	I0701 12:42:03.234406    2374 kubeadm.go:581] duration metric: took 35.090896375s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0701 12:42:03.234428    2374 node_conditions.go:102] verifying NodePressure condition ...
	I0701 12:42:03.407216    2374 request.go:628] Waited for 172.681292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0701 12:42:03.416767    2374 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0701 12:42:03.416827    2374 node_conditions.go:123] node cpu capacity is 2
	I0701 12:42:03.416863    2374 node_conditions.go:105] duration metric: took 182.429ms to run NodePressure ...
	I0701 12:42:03.416891    2374 start.go:228] waiting for startup goroutines ...
	I0701 12:42:03.416909    2374 start.go:233] waiting for cluster config update ...
	I0701 12:42:03.416943    2374 start.go:242] writing updated cluster config ...
	I0701 12:42:03.422959    2374 ssh_runner.go:195] Run: rm -f paused
	I0701 12:42:03.485227    2374 start.go:642] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0701 12:42:03.490474    2374 out.go:177] 
	W0701 12:42:03.494416    2374 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0701 12:42:03.498379    2374 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0701 12:42:03.506446    2374 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-673000" cluster and "default" namespace by default
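Note: the closing warning reports a client/server minor-version skew of 9 (kubectl 1.27.2 against a v1.18.20 cluster), far outside kubectl's supported +/-1 skew window. The suggested workaround is the version-matched kubectl that minikube can supply, e.g.:

    # Run a cluster-matched kubectl through minikube instead of the host binary.
    minikube -p ingress-addon-legacy-673000 kubectl -- get pods -A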
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-07-01 19:40:43 UTC, ends at Sat 2023-07-01 19:43:16 UTC. --
	Jul 01 19:42:52 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:42:52.782536228Z" level=warning msg="cleaning up after shim disconnected" id=bb8239c6424fe6e1adfe170452a615409e0a344c6566f0b7367674ac58945d50 namespace=moby
	Jul 01 19:42:52 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:42:52.782540270Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 19:43:06 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:06.735524374Z" level=info msg="shim disconnected" id=da9bd080e02290a7191c0fb6a51901ca83eccdddb8526d248b4f90a8dd9b9e5c namespace=moby
	Jul 01 19:43:06 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:06.735937445Z" level=warning msg="cleaning up after shim disconnected" id=da9bd080e02290a7191c0fb6a51901ca83eccdddb8526d248b4f90a8dd9b9e5c namespace=moby
	Jul 01 19:43:06 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:06.735961778Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 19:43:06 ingress-addon-legacy-673000 dockerd[1084]: time="2023-07-01T19:43:06.736075650Z" level=info msg="ignoring event" container=da9bd080e02290a7191c0fb6a51901ca83eccdddb8526d248b4f90a8dd9b9e5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 19:43:06 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:06.740751807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 01 19:43:06 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:06.740804597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:43:06 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:06.740817763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 01 19:43:06 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:06.740827805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 01 19:43:06 ingress-addon-legacy-673000 dockerd[1084]: time="2023-07-01T19:43:06.791494512Z" level=info msg="ignoring event" container=b27686725aab75026664f03b3551c88e3c8ccd2f02e17e2fe25e232f3a62423a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 19:43:06 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:06.791803836Z" level=info msg="shim disconnected" id=b27686725aab75026664f03b3551c88e3c8ccd2f02e17e2fe25e232f3a62423a namespace=moby
	Jul 01 19:43:06 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:06.791834627Z" level=warning msg="cleaning up after shim disconnected" id=b27686725aab75026664f03b3551c88e3c8ccd2f02e17e2fe25e232f3a62423a namespace=moby
	Jul 01 19:43:06 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:06.791838877Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 19:43:11 ingress-addon-legacy-673000 dockerd[1084]: time="2023-07-01T19:43:11.139810222Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=e4528b4d75aa804e018018c3a8babf85b3e6ecd7d0136c061fb1af1b14295220
	Jul 01 19:43:11 ingress-addon-legacy-673000 dockerd[1084]: time="2023-07-01T19:43:11.145367500Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=e4528b4d75aa804e018018c3a8babf85b3e6ecd7d0136c061fb1af1b14295220
	Jul 01 19:43:11 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:11.235396795Z" level=info msg="shim disconnected" id=e4528b4d75aa804e018018c3a8babf85b3e6ecd7d0136c061fb1af1b14295220 namespace=moby
	Jul 01 19:43:11 ingress-addon-legacy-673000 dockerd[1084]: time="2023-07-01T19:43:11.235578415Z" level=info msg="ignoring event" container=e4528b4d75aa804e018018c3a8babf85b3e6ecd7d0136c061fb1af1b14295220 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 19:43:11 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:11.236498517Z" level=warning msg="cleaning up after shim disconnected" id=e4528b4d75aa804e018018c3a8babf85b3e6ecd7d0136c061fb1af1b14295220 namespace=moby
	Jul 01 19:43:11 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:11.236523183Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 19:43:11 ingress-addon-legacy-673000 dockerd[1084]: time="2023-07-01T19:43:11.275947616Z" level=info msg="ignoring event" container=dd453fb5ffe875813384f17c8ff51c42f237dac588adbbe409e2af0508408101 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 01 19:43:11 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:11.276043489Z" level=info msg="shim disconnected" id=dd453fb5ffe875813384f17c8ff51c42f237dac588adbbe409e2af0508408101 namespace=moby
	Jul 01 19:43:11 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:11.276074821Z" level=warning msg="cleaning up after shim disconnected" id=dd453fb5ffe875813384f17c8ff51c42f237dac588adbbe409e2af0508408101 namespace=moby
	Jul 01 19:43:11 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:11.276079821Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 01 19:43:11 ingress-addon-legacy-673000 dockerd[1093]: time="2023-07-01T19:43:11.280698372Z" level=warning msg="cleanup warnings time=\"2023-07-01T19:43:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	b27686725aab7       13753a81eccfd                                                                                                      10 seconds ago       Exited              hello-world-app           2                   189105d07dde1
	08b73cfedfd1c       nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                                      34 seconds ago       Running             nginx                     0                   dae5bcdf7fad4
	e4528b4d75aa8       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   57 seconds ago       Exited              controller                0                   dd453fb5ffe87
	c40b98768722c       a883f7fc35610                                                                                                      About a minute ago   Exited              patch                     1                   f22a5eb377176
	fea27b340b346       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   152df575c139f
	33a5320c4cbd6       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   364cde0099581
	e65854d35f77a       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   0b424f56182b7
	0489d85335bd8       ab707b0a0ea33                                                                                                      2 minutes ago        Running             etcd                      0                   ba1ff2c4465ba
	b21d5b637cb36       095f37015706d                                                                                                      2 minutes ago        Running             kube-scheduler            0                   a81439c26579b
	90e0e1d84fd7e       68a4fac29a865                                                                                                      2 minutes ago        Running             kube-controller-manager   0                   d4c77ba2a5c3e
	21e099407e1a4       2694cf044d665                                                                                                      2 minutes ago        Running             kube-apiserver            0                   85a57299d0fdd
	
	* 
	* ==> coredns [33a5320c4cbd] <==
	* [INFO] 172.17.0.1:6762 - 38775 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047042s
	[INFO] 172.17.0.1:6762 - 62314 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026959s
	[INFO] 172.17.0.1:6762 - 186 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000025917s
	[INFO] 172.17.0.1:6762 - 29624 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000035083s
	[INFO] 172.17.0.1:63767 - 41397 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00001725s
	[INFO] 172.17.0.1:63767 - 3275 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000012167s
	[INFO] 172.17.0.1:63767 - 1644 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013959s
	[INFO] 172.17.0.1:63767 - 46299 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011875s
	[INFO] 172.17.0.1:63767 - 54167 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00001075s
	[INFO] 172.17.0.1:63767 - 43430 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000013625s
	[INFO] 172.17.0.1:63767 - 6728 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0001s
	[INFO] 172.17.0.1:26136 - 11769 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00004s
	[INFO] 172.17.0.1:6206 - 59827 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000027458s
	[INFO] 172.17.0.1:26136 - 57513 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000025916s
	[INFO] 172.17.0.1:26136 - 16197 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008792s
	[INFO] 172.17.0.1:26136 - 7189 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008958s
	[INFO] 172.17.0.1:26136 - 945 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009708s
	[INFO] 172.17.0.1:26136 - 43244 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010167s
	[INFO] 172.17.0.1:26136 - 497 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000009959s
	[INFO] 172.17.0.1:6206 - 36535 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000018333s
	[INFO] 172.17.0.1:6206 - 3521 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012084s
	[INFO] 172.17.0.1:6206 - 65444 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00001275s
	[INFO] 172.17.0.1:6206 - 16132 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008958s
	[INFO] 172.17.0.1:6206 - 38881 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011208s
	[INFO] 172.17.0.1:6206 - 20649 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00001525s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-673000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-673000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2455319192314a5b3ac0f7b56253e90d3c5c74c2
	                    minikube.k8s.io/name=ingress-addon-legacy-673000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_01T12_41_12_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Jul 2023 19:41:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-673000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Jul 2023 19:43:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Jul 2023 19:42:48 +0000   Sat, 01 Jul 2023 19:41:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Jul 2023 19:42:48 +0000   Sat, 01 Jul 2023 19:41:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Jul 2023 19:42:48 +0000   Sat, 01 Jul 2023 19:41:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Jul 2023 19:42:48 +0000   Sat, 01 Jul 2023 19:41:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-673000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 2139f9534de54d8a816ddba17bb09d39
	  System UUID:                2139f9534de54d8a816ddba17bb09d39
	  Boot ID:                    435ca913-1494-4438-bac9-1e682bd71ab1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-t9zwl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 coredns-66bff467f8-7f2mv                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     109s
	  kube-system                 etcd-ingress-addon-legacy-673000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-apiserver-ingress-addon-legacy-673000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-673000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-bm78v                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-ingress-addon-legacy-673000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  2m10s (x5 over 2m10s)  kubelet     Node ingress-addon-legacy-673000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x4 over 2m10s)  kubelet     Node ingress-addon-legacy-673000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x4 over 2m10s)  kubelet     Node ingress-addon-legacy-673000 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  118s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  118s                   kubelet     Node ingress-addon-legacy-673000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s                   kubelet     Node ingress-addon-legacy-673000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s                   kubelet     Node ingress-addon-legacy-673000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                118s                   kubelet     Node ingress-addon-legacy-673000 status is now: NodeReady
	  Normal  Starting                 108s                   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Jul 1 19:40] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.651213] EINJ: EINJ table not found.
	[  +0.523040] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043882] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000894] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.211582] systemd-fstab-generator[484]: Ignoring "noauto" for root device
	[  +0.081550] systemd-fstab-generator[495]: Ignoring "noauto" for root device
	[  +0.445899] systemd-fstab-generator[794]: Ignoring "noauto" for root device
	[  +0.175125] systemd-fstab-generator[830]: Ignoring "noauto" for root device
	[  +0.086856] systemd-fstab-generator[841]: Ignoring "noauto" for root device
	[  +0.084277] systemd-fstab-generator[854]: Ignoring "noauto" for root device
	[  +4.298609] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +1.532951] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.411295] systemd-fstab-generator[1525]: Ignoring "noauto" for root device
	[Jul 1 19:41] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.078735] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +6.030109] systemd-fstab-generator[2621]: Ignoring "noauto" for root device
	[ +16.074128] kauditd_printk_skb: 7 callbacks suppressed
	[Jul 1 19:42] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.809249] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +36.497061] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [0489d85335bd] <==
	* raft2023/07/01 19:41:07 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/07/01 19:41:07 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/07/01 19:41:07 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/07/01 19:41:07 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-07-01 19:41:07.304463 W | auth: simple token is not cryptographically signed
	2023-07-01 19:41:07.312331 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-07-01 19:41:07.314133 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/07/01 19:41:07 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-07-01 19:41:07.314509 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	2023-07-01 19:41:07.314941 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-01 19:41:07.314999 I | embed: listening for peers on 192.168.105.6:2380
	2023-07-01 19:41:07.315143 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/07/01 19:41:08 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/07/01 19:41:08 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/07/01 19:41:08 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/07/01 19:41:08 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/07/01 19:41:08 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-07-01 19:41:08.042324 I | etcdserver: published {Name:ingress-addon-legacy-673000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-07-01 19:41:08.042366 I | embed: ready to serve client requests
	2023-07-01 19:41:08.043403 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-01 19:41:08.044422 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-01 19:41:08.045003 I | etcdserver/api: enabled capabilities for version 3.4
	2023-07-01 19:41:08.045218 I | embed: ready to serve client requests
	2023-07-01 19:41:08.045879 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-01 19:41:08.049772 I | embed: serving client requests on 192.168.105.6:2379
	
	* 
	* ==> kernel <==
	*  19:43:16 up 2 min,  0 users,  load average: 0.38, 0.22, 0.09
	Linux ingress-addon-legacy-673000 5.10.57 #1 SMP PREEMPT Thu Jun 22 18:49:06 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [21e099407e1a] <==
	* I0701 19:41:09.569478       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I0701 19:41:09.626184       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 19:41:09.626713       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 19:41:09.635116       1 cache.go:39] Caches are synced for autoregister controller
	I0701 19:41:09.653215       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0701 19:41:09.673911       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0701 19:41:10.525928       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0701 19:41:10.526453       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0701 19:41:10.537247       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0701 19:41:10.543684       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0701 19:41:10.543849       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0701 19:41:10.684251       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0701 19:41:10.694973       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0701 19:41:10.709273       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0701 19:41:10.709600       1 controller.go:609] quota admission added evaluator for: endpoints
	I0701 19:41:10.710898       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0701 19:41:11.855750       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0701 19:41:12.238461       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0701 19:41:12.405101       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0701 19:41:18.642412       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 19:41:27.788171       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0701 19:41:27.820044       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0701 19:42:03.754786       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0701 19:42:39.149374       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0701 19:43:09.141103       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [90e0e1d84fd7] <==
	* I0701 19:41:27.818228       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0701 19:41:27.821067       1 shared_informer.go:230] Caches are synced for resource quota 
	I0701 19:41:27.823642       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"6147a33d-5344-4ad0-8ebf-dc6ec78ab745", APIVersion:"apps/v1", ResourceVersion:"210", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-bm78v
	I0701 19:41:27.829338       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0701 19:41:27.829348       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E0701 19:41:27.830015       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"6147a33d-5344-4ad0-8ebf-dc6ec78ab745", ResourceVersion:"210", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63823837272, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000eb04a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000eb04c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000eb04e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001438fc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000eb0500), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000eb0520), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000eb0560)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40000b3c20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000a1cdf8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40005eed90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400199c6d8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000a1ce88)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0701 19:41:27.850020       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0701 19:41:27.851969       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"055fc627-e726-427f-b692-38c964d49ea9", APIVersion:"apps/v1", ResourceVersion:"322", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-7f2mv
	I0701 19:41:27.853956       1 shared_informer.go:230] Caches are synced for attach detach 
	I0701 19:41:27.864222       1 shared_informer.go:230] Caches are synced for resource quota 
	I0701 19:41:27.865882       1 shared_informer.go:230] Caches are synced for GC 
	I0701 19:41:27.867677       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0701 19:41:27.885093       1 shared_informer.go:230] Caches are synced for expand 
	I0701 19:41:27.904776       1 shared_informer.go:230] Caches are synced for PV protection 
	I0701 19:41:27.913556       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0701 19:41:27.918565       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0701 19:42:03.746237       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b1fc3741-f346-44f3-8dd7-8704c7189022", APIVersion:"apps/v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0701 19:42:03.757306       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"9bc922db-eb8b-400a-a131-9d857700f00b", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-pckrs
	I0701 19:42:03.779036       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6ffb7663-a0f5-4f2e-8c6d-e01809e646be", APIVersion:"batch/v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-8nkzg
	I0701 19:42:03.794426       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"1fa5d0ce-06d2-4dbd-b172-3c440452f48f", APIVersion:"batch/v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-fmd6m
	I0701 19:42:08.241937       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6ffb7663-a0f5-4f2e-8c6d-e01809e646be", APIVersion:"batch/v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0701 19:42:09.264238       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"1fa5d0ce-06d2-4dbd-b172-3c440452f48f", APIVersion:"batch/v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0701 19:42:49.429978       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"f1757cc4-783b-403d-891a-aac8f7abd950", APIVersion:"apps/v1", ResourceVersion:"538", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0701 19:42:49.441650       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"f110446e-0f83-469d-989b-367c697c9eb3", APIVersion:"apps/v1", ResourceVersion:"539", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-t9zwl
	E0701 19:43:13.877190       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-cr89t" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [e65854d35f77] <==
	* W0701 19:41:28.381937       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0701 19:41:28.385804       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0701 19:41:28.385827       1 server_others.go:186] Using iptables Proxier.
	I0701 19:41:28.386044       1 server.go:583] Version: v1.18.20
	I0701 19:41:28.388325       1 config.go:315] Starting service config controller
	I0701 19:41:28.388370       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0701 19:41:28.388436       1 config.go:133] Starting endpoints config controller
	I0701 19:41:28.388459       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0701 19:41:28.488507       1 shared_informer.go:230] Caches are synced for service config 
	I0701 19:41:28.488577       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [b21d5b637cb3] <==
	* W0701 19:41:09.592118       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0701 19:41:09.592137       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0701 19:41:09.597392       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0701 19:41:09.597415       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0701 19:41:09.598950       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0701 19:41:09.599630       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0701 19:41:09.599640       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0701 19:41:09.599680       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0701 19:41:09.602017       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 19:41:09.602094       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 19:41:09.602151       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 19:41:09.602199       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0701 19:41:09.604121       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0701 19:41:09.604201       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 19:41:09.604280       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 19:41:09.604345       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 19:41:09.604396       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0701 19:41:09.604457       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 19:41:09.604569       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 19:41:09.604595       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 19:41:10.465947       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 19:41:10.508986       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 19:41:10.612210       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 19:41:10.642742       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0701 19:41:11.100562       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-07-01 19:40:43 UTC, ends at Sat 2023-07-01 19:43:16 UTC. --
	Jul 01 19:42:54 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:42:54.765160    2627 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: bb8239c6424fe6e1adfe170452a615409e0a344c6566f0b7367674ac58945d50
	Jul 01 19:42:54 ingress-addon-legacy-673000 kubelet[2627]: E0701 19:42:54.765501    2627 pod_workers.go:191] Error syncing pod d96f47c7-2f7b-4e04-a933-36ebbfb58dbf ("hello-world-app-5f5d8b66bb-t9zwl_default(d96f47c7-2f7b-4e04-a933-36ebbfb58dbf)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-t9zwl_default(d96f47c7-2f7b-4e04-a933-36ebbfb58dbf)"
	Jul 01 19:42:56 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:42:56.661238    2627 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fdb1958afcae7b803988ec467f04acd44d38f5116eebfbcec386a906291c4f99
	Jul 01 19:42:56 ingress-addon-legacy-673000 kubelet[2627]: E0701 19:42:56.662086    2627 pod_workers.go:191] Error syncing pod a84d2624-5377-4781-86ee-605e010894c9 ("kube-ingress-dns-minikube_kube-system(a84d2624-5377-4781-86ee-605e010894c9)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(a84d2624-5377-4781-86ee-605e010894c9)"
	Jul 01 19:43:04 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:43:04.914495    2627 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-664mq" (UniqueName: "kubernetes.io/secret/a84d2624-5377-4781-86ee-605e010894c9-minikube-ingress-dns-token-664mq") pod "a84d2624-5377-4781-86ee-605e010894c9" (UID: "a84d2624-5377-4781-86ee-605e010894c9")
	Jul 01 19:43:04 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:43:04.922473    2627 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84d2624-5377-4781-86ee-605e010894c9-minikube-ingress-dns-token-664mq" (OuterVolumeSpecName: "minikube-ingress-dns-token-664mq") pod "a84d2624-5377-4781-86ee-605e010894c9" (UID: "a84d2624-5377-4781-86ee-605e010894c9"). InnerVolumeSpecName "minikube-ingress-dns-token-664mq". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 01 19:43:05 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:43:05.015927    2627 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-664mq" (UniqueName: "kubernetes.io/secret/a84d2624-5377-4781-86ee-605e010894c9-minikube-ingress-dns-token-664mq") on node "ingress-addon-legacy-673000" DevicePath ""
	Jul 01 19:43:06 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:43:06.660373    2627 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: bb8239c6424fe6e1adfe170452a615409e0a344c6566f0b7367674ac58945d50
	Jul 01 19:43:06 ingress-addon-legacy-673000 kubelet[2627]: W0701 19:43:06.804685    2627 container.go:412] Failed to create summary reader for "/kubepods/besteffort/podd96f47c7-2f7b-4e04-a933-36ebbfb58dbf/b27686725aab75026664f03b3551c88e3c8ccd2f02e17e2fe25e232f3a62423a": none of the resources are being tracked.
	Jul 01 19:43:06 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:43:06.953228    2627 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fdb1958afcae7b803988ec467f04acd44d38f5116eebfbcec386a906291c4f99
	Jul 01 19:43:06 ingress-addon-legacy-673000 kubelet[2627]: W0701 19:43:06.955555    2627 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-t9zwl through plugin: invalid network status for
	Jul 01 19:43:06 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:43:06.960140    2627 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b27686725aab75026664f03b3551c88e3c8ccd2f02e17e2fe25e232f3a62423a
	Jul 01 19:43:06 ingress-addon-legacy-673000 kubelet[2627]: E0701 19:43:06.960340    2627 pod_workers.go:191] Error syncing pod d96f47c7-2f7b-4e04-a933-36ebbfb58dbf ("hello-world-app-5f5d8b66bb-t9zwl_default(d96f47c7-2f7b-4e04-a933-36ebbfb58dbf)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-t9zwl_default(d96f47c7-2f7b-4e04-a933-36ebbfb58dbf)"
	Jul 01 19:43:06 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:43:06.967019    2627 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: bb8239c6424fe6e1adfe170452a615409e0a344c6566f0b7367674ac58945d50
	Jul 01 19:43:07 ingress-addon-legacy-673000 kubelet[2627]: W0701 19:43:07.981234    2627 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-t9zwl through plugin: invalid network status for
	Jul 01 19:43:09 ingress-addon-legacy-673000 kubelet[2627]: E0701 19:43:09.131723    2627 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-pckrs.176dd5df02814a4d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-pckrs", UID:"2fc34171-d95d-4518-adb4-bd58dc38402a", APIVersion:"v1", ResourceVersion:"417", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-673000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1203e5347c3884d, ext:116925946162, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1203e5347c3884d, ext:116925946162, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-pckrs.176dd5df02814a4d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 01 19:43:09 ingress-addon-legacy-673000 kubelet[2627]: E0701 19:43:09.140758    2627 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-pckrs.176dd5df02814a4d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-pckrs", UID:"2fc34171-d95d-4518-adb4-bd58dc38402a", APIVersion:"v1", ResourceVersion:"417", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-673000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1203e5347c3884d, ext:116925946162, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1203e534828b6c6, ext:116932577195, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-pckrs.176dd5df02814a4d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 01 19:43:12 ingress-addon-legacy-673000 kubelet[2627]: W0701 19:43:12.048066    2627 pod_container_deletor.go:77] Container "dd453fb5ffe875813384f17c8ff51c42f237dac588adbbe409e2af0508408101" not found in pod's containers
	Jul 01 19:43:13 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:43:13.339556    2627 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-ssp7d" (UniqueName: "kubernetes.io/secret/2fc34171-d95d-4518-adb4-bd58dc38402a-ingress-nginx-token-ssp7d") pod "2fc34171-d95d-4518-adb4-bd58dc38402a" (UID: "2fc34171-d95d-4518-adb4-bd58dc38402a")
	Jul 01 19:43:13 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:43:13.339645    2627 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2fc34171-d95d-4518-adb4-bd58dc38402a-webhook-cert") pod "2fc34171-d95d-4518-adb4-bd58dc38402a" (UID: "2fc34171-d95d-4518-adb4-bd58dc38402a")
	Jul 01 19:43:13 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:43:13.346129    2627 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc34171-d95d-4518-adb4-bd58dc38402a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2fc34171-d95d-4518-adb4-bd58dc38402a" (UID: "2fc34171-d95d-4518-adb4-bd58dc38402a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 01 19:43:13 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:43:13.347516    2627 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc34171-d95d-4518-adb4-bd58dc38402a-ingress-nginx-token-ssp7d" (OuterVolumeSpecName: "ingress-nginx-token-ssp7d") pod "2fc34171-d95d-4518-adb4-bd58dc38402a" (UID: "2fc34171-d95d-4518-adb4-bd58dc38402a"). InnerVolumeSpecName "ingress-nginx-token-ssp7d". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 01 19:43:13 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:43:13.440085    2627 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2fc34171-d95d-4518-adb4-bd58dc38402a-webhook-cert") on node "ingress-addon-legacy-673000" DevicePath ""
	Jul 01 19:43:13 ingress-addon-legacy-673000 kubelet[2627]: I0701 19:43:13.440188    2627 reconciler.go:319] Volume detached for volume "ingress-nginx-token-ssp7d" (UniqueName: "kubernetes.io/secret/2fc34171-d95d-4518-adb4-bd58dc38402a-ingress-nginx-token-ssp7d") on node "ingress-addon-legacy-673000" DevicePath ""
	Jul 01 19:43:14 ingress-addon-legacy-673000 kubelet[2627]: W0701 19:43:14.676285    2627 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/2fc34171-d95d-4518-adb4-bd58dc38402a/volumes" does not exist
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-673000 -n ingress-addon-legacy-673000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-673000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (55.43s)
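
The decisive error in the kubelet log above is the API server refusing the controller's event write because the ingress-nginx namespace was already terminating during addon teardown. A minimal client-go sketch of that condition check (illustrative only: the helper name and clientset wiring are assumptions, not minikube or test-suite code):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// namespaceTerminating reports whether writes into the namespace would be
	// rejected with "unable to create new content ... because it is being terminated".
	func namespaceTerminating(cs kubernetes.Interface, name string) (bool, error) {
		ns, err := cs.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return ns.Status.Phase == corev1.NamespaceTerminating, nil
	}

	func main() {
		// kubeconfig path taken from the test environment above
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/15452-1041/kubeconfig")
		if err != nil {
			panic(err)
		}
		terminating, err := namespaceTerminating(kubernetes.NewForConfigOrDie(cfg), "ingress-nginx")
		fmt.Println(terminating, err)
	}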

TestMountStart/serial/StartWithMountFirst (9.94s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-307000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-307000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.8716875s)

-- stdout --
	* [mount-start-1-307000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-307000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-307000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-307000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-307000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-307000 -n mount-start-1-307000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-307000 -n mount-start-1-307000: exit status 7 (66.826708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-307000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.94s)
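
Every qemu2 start in this report dies the same way: the guest is never created because socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), i.e. the socket_vmnet daemon was not running on the build agent. A quick way to confirm that from Go is to dial the socket directly; a minimal diagnostic sketch (this probe is an assumption for triage, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path reported in the failure above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Reproduces the "Connection refused" seen in the minikube output.
			fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}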

TestMultiNode/serial/FreshStart2Nodes (9.9s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-757000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-757000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.830878208s)

-- stdout --
	* [multinode-757000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-757000 in cluster multinode-757000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-757000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:46:05.435673    2775 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:46:05.435800    2775 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:46:05.435803    2775 out.go:309] Setting ErrFile to fd 2...
	I0701 12:46:05.435805    2775 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:46:05.435883    2775 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:46:05.436943    2775 out.go:303] Setting JSON to false
	I0701 12:46:05.452149    2775 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":935,"bootTime":1688239830,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:46:05.452223    2775 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:46:05.459610    2775 out.go:177] * [multinode-757000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:46:05.467952    2775 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:46:05.468015    2775 notify.go:220] Checking for updates...
	I0701 12:46:05.474949    2775 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:46:05.478002    2775 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:46:05.480989    2775 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:46:05.483938    2775 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:46:05.486986    2775 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:46:05.490073    2775 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:46:05.493969    2775 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:46:05.499820    2775 start.go:297] selected driver: qemu2
	I0701 12:46:05.499825    2775 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:46:05.499830    2775 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:46:05.501801    2775 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:46:05.504931    2775 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:46:05.508024    2775 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:46:05.508051    2775 cni.go:84] Creating CNI manager for ""
	I0701 12:46:05.508055    2775 cni.go:137] 0 nodes found, recommending kindnet
	I0701 12:46:05.508060    2775 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0701 12:46:05.508067    2775 start_flags.go:319] config:
	{Name:multinode-757000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-757000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:46:05.512069    2775 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:46:05.519933    2775 out.go:177] * Starting control plane node multinode-757000 in cluster multinode-757000
	I0701 12:46:05.523987    2775 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:46:05.524013    2775 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:46:05.524028    2775 cache.go:57] Caching tarball of preloaded images
	I0701 12:46:05.524101    2775 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:46:05.524107    2775 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:46:05.524297    2775 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/multinode-757000/config.json ...
	I0701 12:46:05.524314    2775 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/multinode-757000/config.json: {Name:mkef8d5815bf054212f537a3f033941b4b99f362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:46:05.524534    2775 start.go:365] acquiring machines lock for multinode-757000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:46:05.524567    2775 start.go:369] acquired machines lock for "multinode-757000" in 26.125µs
	I0701 12:46:05.524578    2775 start.go:93] Provisioning new machine with config: &{Name:multinode-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-757000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:46:05.524605    2775 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:46:05.532962    2775 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:46:05.549446    2775 start.go:159] libmachine.API.Create for "multinode-757000" (driver="qemu2")
	I0701 12:46:05.549469    2775 client.go:168] LocalClient.Create starting
	I0701 12:46:05.549536    2775 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:46:05.549555    2775 main.go:141] libmachine: Decoding PEM data...
	I0701 12:46:05.549568    2775 main.go:141] libmachine: Parsing certificate...
	I0701 12:46:05.549598    2775 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:46:05.549613    2775 main.go:141] libmachine: Decoding PEM data...
	I0701 12:46:05.549621    2775 main.go:141] libmachine: Parsing certificate...
	I0701 12:46:05.549930    2775 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:46:05.656103    2775 main.go:141] libmachine: Creating SSH key...
	I0701 12:46:05.844609    2775 main.go:141] libmachine: Creating Disk image...
	I0701 12:46:05.844618    2775 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:46:05.844785    2775 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2
	I0701 12:46:05.853555    2775 main.go:141] libmachine: STDOUT: 
	I0701 12:46:05.853570    2775 main.go:141] libmachine: STDERR: 
	I0701 12:46:05.853621    2775 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2 +20000M
	I0701 12:46:05.860702    2775 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:46:05.860717    2775 main.go:141] libmachine: STDERR: 
	I0701 12:46:05.860736    2775 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2
	I0701 12:46:05.860742    2775 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:46:05.860779    2775 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:24:e5:df:39:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2
	I0701 12:46:05.862195    2775 main.go:141] libmachine: STDOUT: 
	I0701 12:46:05.862208    2775 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:46:05.862223    2775 client.go:171] LocalClient.Create took 312.755792ms
	I0701 12:46:07.864399    2775 start.go:128] duration metric: createHost completed in 2.339811333s
	I0701 12:46:07.864485    2775 start.go:83] releasing machines lock for "multinode-757000", held for 2.339952667s
	W0701 12:46:07.864563    2775 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:46:07.872879    2775 out.go:177] * Deleting "multinode-757000" in qemu2 ...
	W0701 12:46:07.890061    2775 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:46:07.890094    2775 start.go:687] Will try again in 5 seconds ...
	I0701 12:46:12.892296    2775 start.go:365] acquiring machines lock for multinode-757000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:46:12.892807    2775 start.go:369] acquired machines lock for "multinode-757000" in 391µs
	I0701 12:46:12.892935    2775 start.go:93] Provisioning new machine with config: &{Name:multinode-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-757000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:46:12.893243    2775 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:46:12.901187    2775 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:46:12.946144    2775 start.go:159] libmachine.API.Create for "multinode-757000" (driver="qemu2")
	I0701 12:46:12.946202    2775 client.go:168] LocalClient.Create starting
	I0701 12:46:12.946327    2775 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:46:12.946367    2775 main.go:141] libmachine: Decoding PEM data...
	I0701 12:46:12.946391    2775 main.go:141] libmachine: Parsing certificate...
	I0701 12:46:12.946477    2775 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:46:12.946504    2775 main.go:141] libmachine: Decoding PEM data...
	I0701 12:46:12.946522    2775 main.go:141] libmachine: Parsing certificate...
	I0701 12:46:12.947079    2775 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:46:13.133859    2775 main.go:141] libmachine: Creating SSH key...
	I0701 12:46:13.183811    2775 main.go:141] libmachine: Creating Disk image...
	I0701 12:46:13.183818    2775 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:46:13.183966    2775 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2
	I0701 12:46:13.192447    2775 main.go:141] libmachine: STDOUT: 
	I0701 12:46:13.192462    2775 main.go:141] libmachine: STDERR: 
	I0701 12:46:13.192523    2775 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2 +20000M
	I0701 12:46:13.199630    2775 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:46:13.199642    2775 main.go:141] libmachine: STDERR: 
	I0701 12:46:13.199672    2775 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2
	I0701 12:46:13.199678    2775 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:46:13.199711    2775 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:44:10:03:fa:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2
	I0701 12:46:13.201184    2775 main.go:141] libmachine: STDOUT: 
	I0701 12:46:13.201196    2775 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:46:13.201207    2775 client.go:171] LocalClient.Create took 255.003375ms
	I0701 12:46:15.203342    2775 start.go:128] duration metric: createHost completed in 2.310093667s
	I0701 12:46:15.203439    2775 start.go:83] releasing machines lock for "multinode-757000", held for 2.310623042s
	W0701 12:46:15.203809    2775 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-757000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-757000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:46:15.211499    2775 out.go:177] 
	W0701 12:46:15.215556    2775 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:46:15.215579    2775 out.go:239] * 
	* 
	W0701 12:46:15.218263    2775 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:46:15.226542    2775 out.go:177] 

** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-757000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000: exit status 7 (67.283125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.90s)
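
The stderr trace makes the failure's control flow explicit: createHost fails, the half-created machine is deleted, minikube waits five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with GUEST_PROVISION. A schematic of that single-retry loop (function names are illustrative, not minikube's internals):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the libmachine create path that fails above.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func startWithRetry() error {
		if err := createHost(); err == nil {
			return nil
		}
		// "! StartHost failed, but will try again" / "Will try again in 5 seconds"
		time.Sleep(5 * time.Second)
		if err := createHost(); err != nil {
			return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
		}
		return nil
	}

	func main() { fmt.Println(startWithRetry()) }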

TestMultiNode/serial/DeployApp2Nodes (116.71s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (117.299583ms)

** stderr ** 
	error: cluster "multinode-757000" does not exist

** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- rollout status deployment/busybox: exit status 1 (53.668167ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (54.3045ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.61575ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.791208ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.634375ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.636208ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.682625ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0701 12:46:29.372039    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.488667ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.062792ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.109ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0701 12:47:21.090682    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
E0701 12:47:21.097058    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
E0701 12:47:21.109125    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
E0701 12:47:21.131248    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
E0701 12:47:21.173313    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
E0701 12:47:21.255457    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
E0701 12:47:21.417438    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
E0701 12:47:21.739721    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
E0701 12:47:22.382073    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.770458ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0701 12:47:23.664247    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
E0701 12:47:26.226364    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
E0701 12:47:31.348611    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
E0701 12:47:41.590771    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
E0701 12:48:02.072769    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.880209ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.547792ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- exec  -- nslookup kubernetes.io: exit status 1 (52.632333ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- exec  -- nslookup kubernetes.default: exit status 1 (52.71525ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (53.401833ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000: exit status 7 (27.979459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (116.71s)
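
Because FreshStart2Nodes never created the VM, no cluster entry was ever written to the kubeconfig, so every kubectl invocation above fails client-side before contacting any server. The lookup kubectl is effectively performing can be reproduced with client-go's kubeconfig loader (a sketch; the path and profile name are taken from the logs):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/15452-1041/kubeconfig")
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Clusters["multinode-757000"]; !ok {
			// The state behind `error: cluster "multinode-757000" does not exist`.
			fmt.Println("no cluster entry for multinode-757000")
		}
	}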

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-757000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (52.757375ms)

** stderr ** 
	error: no server found for cluster "multinode-757000"

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000: exit status 7 (27.935167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-757000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-757000 -v 3 --alsologtostderr: exit status 89 (42.495208ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-757000"

-- /stdout --
** stderr ** 
	I0701 12:48:12.123157    2889 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:48:12.123363    2889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:12.123365    2889 out.go:309] Setting ErrFile to fd 2...
	I0701 12:48:12.123368    2889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:12.123432    2889 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:48:12.123654    2889 mustload.go:65] Loading cluster: multinode-757000
	I0701 12:48:12.123824    2889 config.go:182] Loaded profile config "multinode-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:48:12.128629    2889 out.go:177] * The control plane node must be running for this command
	I0701 12:48:12.133765    2889 out.go:177]   To start a cluster, run: "minikube start -p multinode-757000"

** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-757000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000: exit status 7 (27.998417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
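
Unlike the kubectl failures above, `node add` fails inside minikube itself: mustload sees the profile's host is Stopped and exits 89 immediately after printing the guard message. A minimal sketch of such a gate (the function is illustrative, and the exit-status-89 mapping is inferred from this log rather than from minikube's source):

	package main

	import (
		"fmt"
		"os"
	)

	func requireRunning(hostState, profile string) {
		if hostState != "Running" {
			fmt.Println("* The control plane node must be running for this command")
			fmt.Printf("  To start a cluster, run: %q\n", "minikube start -p "+profile)
			os.Exit(89) // matches the exit status observed above
		}
	}

	func main() { requireRunning("Stopped", "multinode-757000") }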

TestMultiNode/serial/ProfileList (0.16s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-757000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-757000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-757000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.27.3\",\"ClusterName\":\"multinode-757000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.27.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000: exit status 7 (31.483541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.16s)
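
The assertion at multinode_test.go:155 parses `profile list --output json` and counts Config.Nodes; the dump above contains a single node entry where three were expected, because the second node was never added. A trimmed sketch of that count (structs reduced to the fields used, fed an abbreviated sample of the JSON above; not the test's actual types):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []json.RawMessage `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-757000","Config":{"Nodes":[{"Name":""}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s); the test wanted 3\n", p.Name, len(p.Config.Nodes))
		}
	}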

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-757000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-757000 status --output json --alsologtostderr: exit status 7 (27.688333ms)

-- stdout --
	{"Name":"multinode-757000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0701 12:48:12.351459    2899 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:48:12.351582    2899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:12.351585    2899 out.go:309] Setting ErrFile to fd 2...
	I0701 12:48:12.351587    2899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:12.351655    2899 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:48:12.351768    2899 out.go:303] Setting JSON to true
	I0701 12:48:12.351786    2899 mustload.go:65] Loading cluster: multinode-757000
	I0701 12:48:12.351822    2899 notify.go:220] Checking for updates...
	I0701 12:48:12.351949    2899 config.go:182] Loaded profile config "multinode-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:48:12.351954    2899 status.go:255] checking status of multinode-757000 ...
	I0701 12:48:12.352133    2899 status.go:330] multinode-757000 host status = "Stopped" (err=<nil>)
	I0701 12:48:12.352140    2899 status.go:343] host is not running, skipping remaining checks
	I0701 12:48:12.352143    2899 status.go:257] multinode-757000 status: &{Name:multinode-757000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-757000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000: exit status 7 (28.484958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
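Note: the decode failure logged at multinode_test.go:180 is a shape mismatch rather than corrupt output. With only one (stopped) node left in the profile, `minikube status --output json` prints a single JSON object (see the stdout block above), while the test unmarshals into a `[]cmd.Status` slice. Below is a minimal sketch of a decoder tolerant of both shapes, using a hypothetical Status struct that mirrors the fields visible in the output (not minikube's actual cmd.Status):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Status mirrors the fields visible in the stdout block above; it is a
    // hypothetical stand-in, not minikube's internal cmd.Status type.
    type Status struct {
    	Name       string
    	Host       string
    	Kubelet    string
    	APIServer  string
    	Kubeconfig string
    	Worker     bool
    }

    // decodeStatuses accepts either a JSON array of status objects (multi-node)
    // or a single object (the degraded one-node case captured above).
    func decodeStatuses(data []byte) ([]Status, error) {
    	var many []Status
    	if err := json.Unmarshal(data, &many); err == nil {
    		return many, nil
    	}
    	var one Status
    	if err := json.Unmarshal(data, &one); err != nil {
    		return nil, err
    	}
    	return []Status{one}, nil
    }

    func main() {
    	// The exact stdout captured in the failing run:
    	out := []byte(`{"Name":"multinode-757000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
    	statuses, err := decodeStatuses(out)
    	fmt.Println(statuses, err)
    }

Trying the slice first keeps the normal multi-node path unchanged; the single-object fallback covers the degraded one-node state captured in this run.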

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-757000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-757000 node stop m03: exit status 85 (45.703625ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-757000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-757000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-757000 status: exit status 7 (27.732333ms)

-- stdout --
	multinode-757000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-757000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-757000 status --alsologtostderr: exit status 7 (28.085875ms)

-- stdout --
	multinode-757000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 12:48:12.482299    2907 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:48:12.482450    2907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:12.482453    2907 out.go:309] Setting ErrFile to fd 2...
	I0701 12:48:12.482455    2907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:12.482531    2907 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:48:12.482649    2907 out.go:303] Setting JSON to false
	I0701 12:48:12.482666    2907 mustload.go:65] Loading cluster: multinode-757000
	I0701 12:48:12.482712    2907 notify.go:220] Checking for updates...
	I0701 12:48:12.482872    2907 config.go:182] Loaded profile config "multinode-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:48:12.482877    2907 status.go:255] checking status of multinode-757000 ...
	I0701 12:48:12.483066    2907 status.go:330] multinode-757000 host status = "Stopped" (err=<nil>)
	I0701 12:48:12.483070    2907 status.go:343] host is not running, skipping remaining checks
	I0701 12:48:12.483072    2907 status.go:257] multinode-757000 status: &{Name:multinode-757000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-757000 status --alsologtostderr": multinode-757000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000: exit status 7 (28.045375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
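Note: exit status 85 (GUEST_NODE_RETRIEVE) here is a downstream effect of the earlier FreshStart2Nodes failure: the VMs never came up, so the profile's config.json still records a single unnamed control-plane node, and no m03 exists to stop or start. As a sanity check, here is a small standalone Go sketch (a diagnostic only, not part of the suite) that prints the node entries such a lookup would see; the config path is copied from the logs, and the struct keeps only the node fields visible in the profile dump at the top of this group:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // profileConfig pulls out just the Nodes entries from a minikube profile's
    // config.json; the full shape appears in the profile dump above.
    type profileConfig struct {
    	Nodes []struct {
    		Name         string
    		ControlPlane bool
    		Worker       bool
    	}
    }

    func main() {
    	// Profile path taken verbatim from the logs; adjust for another MINIKUBE_HOME.
    	const cfgPath = "/Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/multinode-757000/config.json"
    	data, err := os.ReadFile(cfgPath)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	var cfg profileConfig
    	if err := json.Unmarshal(data, &cfg); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for i, n := range cfg.Nodes {
    		fmt.Printf("node %d: name=%q controlPlane=%v worker=%v\n", i, n.Name, n.ControlPlane, n.Worker)
    	}
    }

Against this profile it should print a single entry with an empty name, consistent with the failed m03 lookups in this test and in StartAfterStop below.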

TestMultiNode/serial/StartAfterStop (0.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-757000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-757000 node start m03 --alsologtostderr: exit status 85 (43.643375ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0701 12:48:12.538988    2911 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:48:12.539195    2911 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:12.539199    2911 out.go:309] Setting ErrFile to fd 2...
	I0701 12:48:12.539201    2911 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:12.539267    2911 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:48:12.539495    2911 mustload.go:65] Loading cluster: multinode-757000
	I0701 12:48:12.539677    2911 config.go:182] Loaded profile config "multinode-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:48:12.543151    2911 out.go:177] 
	W0701 12:48:12.546134    2911 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0701 12:48:12.546138    2911 out.go:239] * 
	* 
	W0701 12:48:12.547764    2911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:48:12.551080    2911 out.go:177] 

** /stderr **
multinode_test.go:256: I0701 12:48:12.538988    2911 out.go:296] Setting OutFile to fd 1 ...
I0701 12:48:12.539195    2911 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 12:48:12.539199    2911 out.go:309] Setting ErrFile to fd 2...
I0701 12:48:12.539201    2911 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 12:48:12.539267    2911 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
I0701 12:48:12.539495    2911 mustload.go:65] Loading cluster: multinode-757000
I0701 12:48:12.539677    2911 config.go:182] Loaded profile config "multinode-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0701 12:48:12.543151    2911 out.go:177] 
W0701 12:48:12.546134    2911 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0701 12:48:12.546138    2911 out.go:239] * 
* 
W0701 12:48:12.547764    2911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0701 12:48:12.551080    2911 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-757000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-757000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-757000 status: exit status 7 (28.014416ms)

-- stdout --
	multinode-757000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-757000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000: exit status 7 (28.010834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)

TestMultiNode/serial/RestartKeepsNodes (5.37s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-757000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-757000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-757000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-757000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.181442792s)

-- stdout --
	* [multinode-757000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-757000 in cluster multinode-757000
	* Restarting existing qemu2 VM for "multinode-757000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-757000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:48:12.724085    2921 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:48:12.724200    2921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:12.724203    2921 out.go:309] Setting ErrFile to fd 2...
	I0701 12:48:12.724205    2921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:12.724287    2921 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:48:12.725269    2921 out.go:303] Setting JSON to false
	I0701 12:48:12.740266    2921 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1062,"bootTime":1688239830,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:48:12.740332    2921 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:48:12.745166    2921 out.go:177] * [multinode-757000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:48:12.752121    2921 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:48:12.756104    2921 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:48:12.752197    2921 notify.go:220] Checking for updates...
	I0701 12:48:12.763092    2921 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:48:12.767064    2921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:48:12.770025    2921 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:48:12.773128    2921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:48:12.776362    2921 config.go:182] Loaded profile config "multinode-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:48:12.776410    2921 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:48:12.781051    2921 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 12:48:12.788135    2921 start.go:297] selected driver: qemu2
	I0701 12:48:12.788141    2921 start.go:944] validating driver "qemu2" against &{Name:multinode-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.3 ClusterName:multinode-757000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:48:12.788210    2921 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:48:12.790170    2921 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:48:12.790192    2921 cni.go:84] Creating CNI manager for ""
	I0701 12:48:12.790197    2921 cni.go:137] 1 nodes found, recommending kindnet
	I0701 12:48:12.790201    2921 start_flags.go:319] config:
	{Name:multinode-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-757000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:48:12.794230    2921 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:48:12.802086    2921 out.go:177] * Starting control plane node multinode-757000 in cluster multinode-757000
	I0701 12:48:12.805018    2921 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:48:12.805039    2921 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:48:12.805050    2921 cache.go:57] Caching tarball of preloaded images
	I0701 12:48:12.805104    2921 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:48:12.805110    2921 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:48:12.805165    2921 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/multinode-757000/config.json ...
	I0701 12:48:12.805535    2921 start.go:365] acquiring machines lock for multinode-757000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:48:12.805567    2921 start.go:369] acquired machines lock for "multinode-757000" in 26.375µs
	I0701 12:48:12.805579    2921 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:48:12.805585    2921 fix.go:54] fixHost starting: 
	I0701 12:48:12.805704    2921 fix.go:102] recreateIfNeeded on multinode-757000: state=Stopped err=<nil>
	W0701 12:48:12.805713    2921 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:48:12.813084    2921 out.go:177] * Restarting existing qemu2 VM for "multinode-757000" ...
	I0701 12:48:12.817099    2921 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:44:10:03:fa:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2
	I0701 12:48:12.819006    2921 main.go:141] libmachine: STDOUT: 
	I0701 12:48:12.819022    2921 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:48:12.819048    2921 fix.go:56] fixHost completed within 13.465166ms
	I0701 12:48:12.819054    2921 start.go:83] releasing machines lock for "multinode-757000", held for 13.481291ms
	W0701 12:48:12.819060    2921 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:48:12.819100    2921 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:48:12.819105    2921 start.go:687] Will try again in 5 seconds ...
	I0701 12:48:17.821203    2921 start.go:365] acquiring machines lock for multinode-757000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:48:17.821514    2921 start.go:369] acquired machines lock for "multinode-757000" in 249.875µs
	I0701 12:48:17.821630    2921 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:48:17.821648    2921 fix.go:54] fixHost starting: 
	I0701 12:48:17.822317    2921 fix.go:102] recreateIfNeeded on multinode-757000: state=Stopped err=<nil>
	W0701 12:48:17.822344    2921 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:48:17.830525    2921 out.go:177] * Restarting existing qemu2 VM for "multinode-757000" ...
	I0701 12:48:17.834902    2921 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:44:10:03:fa:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2
	I0701 12:48:17.843164    2921 main.go:141] libmachine: STDOUT: 
	I0701 12:48:17.843224    2921 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:48:17.843287    2921 fix.go:56] fixHost completed within 21.639875ms
	I0701 12:48:17.843302    2921 start.go:83] releasing machines lock for "multinode-757000", held for 21.768667ms
	W0701 12:48:17.843453    2921 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-757000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-757000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:48:17.851624    2921 out.go:177] 
	W0701 12:48:17.855760    2921 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:48:17.855841    2921 out.go:239] * 
	* 
	W0701 12:48:17.859926    2921 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:48:17.867702    2921 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-757000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-757000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000: exit status 7 (30.257458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.37s)
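Note: every restart attempt in this group dies at the same point: the qemu2 driver shells out to socket_vmnet_client, which cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so qemu never starts. Here is a minimal standalone probe (a diagnostic sketch, not minikube code) that reproduces the check by dialing the same unix socket the driver uses:

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// The unix socket the qemu2 driver hands to socket_vmnet_client
    	// (path taken from the executed command line in the logs).
    	const sock = "/var/run/socket_vmnet"
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		// On this agent the dial fails with "connection refused",
    		// matching the driver STDERR above.
    		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is listening on", sock)
    }

A refused dial reproduces the driver failure seen here; a successful dial would mean the daemon is listening again and the restart could get past this step.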

TestMultiNode/serial/DeleteNode (0.09s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-757000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-757000 node delete m03: exit status 89 (37.91025ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-757000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-757000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-757000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-757000 status --alsologtostderr: exit status 7 (27.673125ms)

-- stdout --
	multinode-757000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 12:48:18.042542    2935 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:48:18.042662    2935 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:18.042665    2935 out.go:309] Setting ErrFile to fd 2...
	I0701 12:48:18.042667    2935 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:18.042731    2935 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:48:18.042835    2935 out.go:303] Setting JSON to false
	I0701 12:48:18.042853    2935 mustload.go:65] Loading cluster: multinode-757000
	I0701 12:48:18.042899    2935 notify.go:220] Checking for updates...
	I0701 12:48:18.043035    2935 config.go:182] Loaded profile config "multinode-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:48:18.043041    2935 status.go:255] checking status of multinode-757000 ...
	I0701 12:48:18.043235    2935 status.go:330] multinode-757000 host status = "Stopped" (err=<nil>)
	I0701 12:48:18.043240    2935 status.go:343] host is not running, skipping remaining checks
	I0701 12:48:18.043242    2935 status.go:257] multinode-757000 status: &{Name:multinode-757000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-757000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000: exit status 7 (27.833708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.09s)

TestMultiNode/serial/StopMultiNode (0.14s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-757000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-757000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-757000 status: exit status 7 (28.403667ms)

-- stdout --
	multinode-757000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-757000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-757000 status --alsologtostderr: exit status 7 (27.542084ms)

-- stdout --
	multinode-757000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0701 12:48:18.186956    2943 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:48:18.187081    2943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:18.187083    2943 out.go:309] Setting ErrFile to fd 2...
	I0701 12:48:18.187086    2943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:18.187153    2943 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:48:18.187265    2943 out.go:303] Setting JSON to false
	I0701 12:48:18.187275    2943 mustload.go:65] Loading cluster: multinode-757000
	I0701 12:48:18.187342    2943 notify.go:220] Checking for updates...
	I0701 12:48:18.187446    2943 config.go:182] Loaded profile config "multinode-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:48:18.187452    2943 status.go:255] checking status of multinode-757000 ...
	I0701 12:48:18.187644    2943 status.go:330] multinode-757000 host status = "Stopped" (err=<nil>)
	I0701 12:48:18.187648    2943 status.go:343] host is not running, skipping remaining checks
	I0701 12:48:18.187650    2943 status.go:257] multinode-757000 status: &{Name:multinode-757000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-757000 status --alsologtostderr": multinode-757000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-757000 status --alsologtostderr": multinode-757000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000: exit status 7 (27.418417ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.14s)

TestMultiNode/serial/RestartMultiNode (5.24s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-757000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-757000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.173809541s)

-- stdout --
	* [multinode-757000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-757000 in cluster multinode-757000
	* Restarting existing qemu2 VM for "multinode-757000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-757000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:48:18.241900    2947 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:48:18.242026    2947 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:18.242029    2947 out.go:309] Setting ErrFile to fd 2...
	I0701 12:48:18.242031    2947 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:18.242117    2947 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:48:18.243045    2947 out.go:303] Setting JSON to false
	I0701 12:48:18.258080    2947 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1068,"bootTime":1688239830,"procs":395,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:48:18.258146    2947 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:48:18.263407    2947 out.go:177] * [multinode-757000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:48:18.270420    2947 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:48:18.274347    2947 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:48:18.270470    2947 notify.go:220] Checking for updates...
	I0701 12:48:18.278394    2947 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:48:18.279912    2947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:48:18.283421    2947 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:48:18.286392    2947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:48:18.289694    2947 config.go:182] Loaded profile config "multinode-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:48:18.289946    2947 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:48:18.294372    2947 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 12:48:18.301370    2947 start.go:297] selected driver: qemu2
	I0701 12:48:18.301375    2947 start.go:944] validating driver "qemu2" against &{Name:multinode-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.3 ClusterName:multinode-757000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:48:18.301424    2947 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:48:18.303281    2947 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:48:18.303304    2947 cni.go:84] Creating CNI manager for ""
	I0701 12:48:18.303309    2947 cni.go:137] 1 nodes found, recommending kindnet
	I0701 12:48:18.303314    2947 start_flags.go:319] config:
	{Name:multinode-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-757000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:48:18.307091    2947 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:48:18.314437    2947 out.go:177] * Starting control plane node multinode-757000 in cluster multinode-757000
	I0701 12:48:18.318271    2947 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:48:18.318292    2947 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:48:18.318305    2947 cache.go:57] Caching tarball of preloaded images
	I0701 12:48:18.318358    2947 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:48:18.318364    2947 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:48:18.318423    2947 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/multinode-757000/config.json ...
	I0701 12:48:18.318789    2947 start.go:365] acquiring machines lock for multinode-757000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:48:18.318812    2947 start.go:369] acquired machines lock for "multinode-757000" in 17.833µs
	I0701 12:48:18.318822    2947 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:48:18.318826    2947 fix.go:54] fixHost starting: 
	I0701 12:48:18.318929    2947 fix.go:102] recreateIfNeeded on multinode-757000: state=Stopped err=<nil>
	W0701 12:48:18.318937    2947 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:48:18.325397    2947 out.go:177] * Restarting existing qemu2 VM for "multinode-757000" ...
	I0701 12:48:18.329394    2947 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:44:10:03:fa:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2
	I0701 12:48:18.331077    2947 main.go:141] libmachine: STDOUT: 
	I0701 12:48:18.331090    2947 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:48:18.331116    2947 fix.go:56] fixHost completed within 12.290416ms
	I0701 12:48:18.331120    2947 start.go:83] releasing machines lock for "multinode-757000", held for 12.30525ms
	W0701 12:48:18.331127    2947 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:48:18.331172    2947 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:48:18.331176    2947 start.go:687] Will try again in 5 seconds ...
	I0701 12:48:23.333256    2947 start.go:365] acquiring machines lock for multinode-757000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:48:23.333794    2947 start.go:369] acquired machines lock for "multinode-757000" in 406.917µs
	I0701 12:48:23.333960    2947 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:48:23.333982    2947 fix.go:54] fixHost starting: 
	I0701 12:48:23.334790    2947 fix.go:102] recreateIfNeeded on multinode-757000: state=Stopped err=<nil>
	W0701 12:48:23.334817    2947 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:48:23.343206    2947 out.go:177] * Restarting existing qemu2 VM for "multinode-757000" ...
	I0701 12:48:23.347413    2947 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:44:10:03:fa:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/multinode-757000/disk.qcow2
	I0701 12:48:23.356430    2947 main.go:141] libmachine: STDOUT: 
	I0701 12:48:23.356469    2947 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:48:23.356551    2947 fix.go:56] fixHost completed within 22.571792ms
	I0701 12:48:23.356569    2947 start.go:83] releasing machines lock for "multinode-757000", held for 22.74075ms
	W0701 12:48:23.356725    2947 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-757000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-757000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:48:23.364194    2947 out.go:177] 
	W0701 12:48:23.368272    2947 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:48:23.368329    2947 out.go:239] * 
	* 
	W0701 12:48:23.372191    2947 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:48:23.377199    2947 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-757000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000: exit status 7 (66.484125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.24s)
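
Every failure in this run bottoms out in the same error: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and the VM start aborts before boot. A minimal Go sketch of the same reachability check (a hypothetical diagnostic for the build agent, not part of the test suite) looks like this:

package main

// Hypothetical diagnostic: dial the unix socket that socket_vmnet_client uses.
// A daemon that is not running yields "connect: connection refused", matching
// the error repeated throughout this log.

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from SocketVMnetPath in the profile config dumped above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet accepting connections at %s\n", sock)
}

A refused connection from this probe, before any minikube involvement, would point at the daemon on the agent rather than at the tests themselves.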

TestMultiNode/serial/ValidateNameConflict (20.15s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-757000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-757000-m01 --driver=qemu2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-757000-m01 --driver=qemu2 : exit status 80 (9.974611042s)

-- stdout --
	* [multinode-757000-m01] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-757000-m01 in cluster multinode-757000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-757000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-757000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-757000-m02 --driver=qemu2 
E0701 12:48:43.034674    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-757000-m02 --driver=qemu2 : exit status 80 (9.903078s)

-- stdout --
	* [multinode-757000-m02] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-757000-m02 in cluster multinode-757000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-757000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-757000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-757000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-757000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-757000: exit status 89 (77.890792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-757000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-757000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-757000 -n multinode-757000: exit status 7 (31.114ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.15s)
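
Note the three distinct exit codes in this block: 80 (GUEST_PROVISION, the provisioning failure), 89 (node add refused because the control plane is not running), and 7 from the post-mortem status probe, which the harness logs as "may be ok". A sketch of reading that status exit code from Go, in the spirit of the helpers_test.go post-mortem (an assumed shape, not the suite's actual code):

package main

// Assumed sketch: run "minikube status" and classify the exit code the way
// the post-mortem does (7 = host stopped, logged but tolerated).

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "multinode-757000", "-n", "multinode-757000")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host running: %s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		fmt.Printf("status error: exit status 7 (may be ok), host=%s", out)
	default:
		fmt.Println("status failed:", err)
	}
}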

TestPreload (10.01s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-779000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0701 12:48:45.503144    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-779000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.844024417s)

-- stdout --
	* [test-preload-779000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-779000 in cluster test-preload-779000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-779000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:48:43.758282    3009 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:48:43.758398    3009 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:43.758400    3009 out.go:309] Setting ErrFile to fd 2...
	I0701 12:48:43.758402    3009 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:48:43.758462    3009 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:48:43.759694    3009 out.go:303] Setting JSON to false
	I0701 12:48:43.775938    3009 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1093,"bootTime":1688239830,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:48:43.776012    3009 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:48:43.781011    3009 out.go:177] * [test-preload-779000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:48:43.788974    3009 notify.go:220] Checking for updates...
	I0701 12:48:43.792970    3009 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:48:43.794431    3009 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:48:43.797957    3009 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:48:43.800972    3009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:48:43.803983    3009 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:48:43.806946    3009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:48:43.810307    3009 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:48:43.810360    3009 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:48:43.815005    3009 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:48:43.821934    3009 start.go:297] selected driver: qemu2
	I0701 12:48:43.821940    3009 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:48:43.821948    3009 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:48:43.823936    3009 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:48:43.826985    3009 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:48:43.830049    3009 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:48:43.830064    3009 cni.go:84] Creating CNI manager for ""
	I0701 12:48:43.830070    3009 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:48:43.830073    3009 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:48:43.830078    3009 start_flags.go:319] config:
	{Name:test-preload-779000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:48:43.834264    3009 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:48:43.840820    3009 out.go:177] * Starting control plane node test-preload-779000 in cluster test-preload-779000
	I0701 12:48:43.844935    3009 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0701 12:48:43.845035    3009 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/test-preload-779000/config.json ...
	I0701 12:48:43.845062    3009 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/test-preload-779000/config.json: {Name:mka93071d3cfc049667bdd7994f91bb4083e9f74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:48:43.845044    3009 cache.go:107] acquiring lock: {Name:mk71b444eadbf49d353c223d7a0ae7d698bf0b44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:48:43.845062    3009 cache.go:107] acquiring lock: {Name:mk440b88153fd8cc9dd0202761d42084ab452bc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:48:43.845075    3009 cache.go:107] acquiring lock: {Name:mk8a69884e4b42a433b4a63b98847f05a51050b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:48:43.845096    3009 cache.go:107] acquiring lock: {Name:mk14caa435dc09004b5bc4f3cbf38cb85560e55b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:48:43.845097    3009 cache.go:107] acquiring lock: {Name:mkdbce62d00e379b9cef851af63b783333395018 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:48:43.845268    3009 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 12:48:43.845252    3009 cache.go:107] acquiring lock: {Name:mk2e510b5e0c7e8fb4e00b09399521de29d2d0ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:48:43.845293    3009 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0701 12:48:43.845285    3009 cache.go:107] acquiring lock: {Name:mk2d21cba38ec36c405b1e7dbd4a33953e4e43d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:48:43.845282    3009 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0701 12:48:43.845333    3009 cache.go:107] acquiring lock: {Name:mka63d9eed8f8323444dab0e97798d6ceceb72e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:48:43.845282    3009 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0701 12:48:43.845466    3009 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0701 12:48:43.845484    3009 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0701 12:48:43.845504    3009 start.go:365] acquiring machines lock for test-preload-779000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:48:43.845539    3009 start.go:369] acquired machines lock for "test-preload-779000" in 27.833µs
	I0701 12:48:43.845596    3009 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0701 12:48:43.845552    3009 start.go:93] Provisioning new machine with config: &{Name:test-preload-779000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:48:43.845615    3009 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:48:43.853953    3009 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:48:43.845652    3009 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0701 12:48:43.859538    3009 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 12:48:43.859565    3009 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0701 12:48:43.859571    3009 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0701 12:48:43.859594    3009 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0701 12:48:43.859707    3009 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0701 12:48:43.863039    3009 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0701 12:48:43.863514    3009 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0701 12:48:43.863610    3009 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0701 12:48:43.870260    3009 start.go:159] libmachine.API.Create for "test-preload-779000" (driver="qemu2")
	I0701 12:48:43.870275    3009 client.go:168] LocalClient.Create starting
	I0701 12:48:43.870336    3009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:48:43.870356    3009 main.go:141] libmachine: Decoding PEM data...
	I0701 12:48:43.870369    3009 main.go:141] libmachine: Parsing certificate...
	I0701 12:48:43.870415    3009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:48:43.870430    3009 main.go:141] libmachine: Decoding PEM data...
	I0701 12:48:43.870441    3009 main.go:141] libmachine: Parsing certificate...
	I0701 12:48:43.870738    3009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:48:44.040524    3009 main.go:141] libmachine: Creating SSH key...
	I0701 12:48:44.136934    3009 main.go:141] libmachine: Creating Disk image...
	I0701 12:48:44.136952    3009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:48:44.137125    3009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/disk.qcow2
	I0701 12:48:44.145899    3009 main.go:141] libmachine: STDOUT: 
	I0701 12:48:44.145938    3009 main.go:141] libmachine: STDERR: 
	I0701 12:48:44.146001    3009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/disk.qcow2 +20000M
	I0701 12:48:44.153806    3009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:48:44.153827    3009 main.go:141] libmachine: STDERR: 
	I0701 12:48:44.153850    3009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/disk.qcow2
	I0701 12:48:44.153861    3009 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:48:44.153903    3009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:36:39:6f:5f:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/disk.qcow2
	I0701 12:48:44.156100    3009 main.go:141] libmachine: STDOUT: 
	I0701 12:48:44.156121    3009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:48:44.156144    3009 client.go:171] LocalClient.Create took 285.868542ms
	W0701 12:48:44.840737    3009 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0701 12:48:44.840764    3009 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0701 12:48:45.099250    3009 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0701 12:48:45.099274    3009 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.254256084s
	I0701 12:48:45.099282    3009 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0701 12:48:45.354051    3009 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0701 12:48:45.356556    3009 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0701 12:48:45.392103    3009 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0701 12:48:45.527830    3009 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0701 12:48:45.527860    3009 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.682688125s
	I0701 12:48:45.527869    3009 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0701 12:48:45.564043    3009 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0701 12:48:45.588130    3009 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0701 12:48:45.588148    3009 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0701 12:48:45.864238    3009 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0701 12:48:46.079337    3009 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0701 12:48:46.156399    3009 start.go:128] duration metric: createHost completed in 2.310808583s
	I0701 12:48:46.156439    3009 start.go:83] releasing machines lock for "test-preload-779000", held for 2.310934417s
	W0701 12:48:46.156498    3009 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:48:46.165547    3009 out.go:177] * Deleting "test-preload-779000" in qemu2 ...
	W0701 12:48:46.184889    3009 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:48:46.184932    3009 start.go:687] Will try again in 5 seconds ...
	I0701 12:48:47.900784    3009 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0701 12:48:47.900837    3009 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.055830542s
	I0701 12:48:47.900870    3009 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0701 12:48:48.057523    3009 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0701 12:48:48.057565    3009 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.21237s
	I0701 12:48:48.057608    3009 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0701 12:48:48.787737    3009 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0701 12:48:48.787793    3009 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.942834084s
	I0701 12:48:48.787826    3009 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0701 12:48:50.019513    3009 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0701 12:48:50.019561    3009 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.17460925s
	I0701 12:48:50.019601    3009 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0701 12:48:50.037211    3009 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0701 12:48:50.037248    3009 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.192269833s
	I0701 12:48:50.037302    3009 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0701 12:48:51.185108    3009 start.go:365] acquiring machines lock for test-preload-779000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:48:51.185560    3009 start.go:369] acquired machines lock for "test-preload-779000" in 349.083µs
	I0701 12:48:51.185662    3009 start.go:93] Provisioning new machine with config: &{Name:test-preload-779000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:48:51.185945    3009 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:48:51.196509    3009 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:48:51.245782    3009 start.go:159] libmachine.API.Create for "test-preload-779000" (driver="qemu2")
	I0701 12:48:51.245832    3009 client.go:168] LocalClient.Create starting
	I0701 12:48:51.245951    3009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:48:51.245999    3009 main.go:141] libmachine: Decoding PEM data...
	I0701 12:48:51.246038    3009 main.go:141] libmachine: Parsing certificate...
	I0701 12:48:51.246138    3009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:48:51.246166    3009 main.go:141] libmachine: Decoding PEM data...
	I0701 12:48:51.246179    3009 main.go:141] libmachine: Parsing certificate...
	I0701 12:48:51.246748    3009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:48:51.400213    3009 main.go:141] libmachine: Creating SSH key...
	I0701 12:48:51.518316    3009 main.go:141] libmachine: Creating Disk image...
	I0701 12:48:51.518326    3009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:48:51.518470    3009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/disk.qcow2
	I0701 12:48:51.527024    3009 main.go:141] libmachine: STDOUT: 
	I0701 12:48:51.527039    3009 main.go:141] libmachine: STDERR: 
	I0701 12:48:51.527104    3009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/disk.qcow2 +20000M
	I0701 12:48:51.534239    3009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:48:51.534289    3009 main.go:141] libmachine: STDERR: 
	I0701 12:48:51.534300    3009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/disk.qcow2
	I0701 12:48:51.534310    3009 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:48:51.534343    3009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:30:0b:ce:54:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/test-preload-779000/disk.qcow2
	I0701 12:48:51.535874    3009 main.go:141] libmachine: STDOUT: 
	I0701 12:48:51.535887    3009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:48:51.535900    3009 client.go:171] LocalClient.Create took 290.064625ms
	I0701 12:48:53.224075    3009 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0701 12:48:53.224150    3009 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.379110541s
	I0701 12:48:53.224181    3009 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0701 12:48:53.224232    3009 cache.go:87] Successfully saved all images to host disk.
	I0701 12:48:53.538052    3009 start.go:128] duration metric: createHost completed in 2.352125208s
	I0701 12:48:53.538102    3009 start.go:83] releasing machines lock for "test-preload-779000", held for 2.352563584s
	W0701 12:48:53.538454    3009 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-779000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-779000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:48:53.546826    3009 out.go:177] 
	W0701 12:48:53.550994    3009 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:48:53.551025    3009 out.go:239] * 
	* 
	W0701 12:48:53.553988    3009 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:48:53.561939    3009 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-779000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-07-01 12:48:53.580452 -0700 PDT m=+848.651527793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-779000 -n test-preload-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-779000 -n test-preload-779000: exit status 7 (66.714042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-779000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-779000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-779000
--- FAIL: TestPreload (10.01s)
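
One detail worth keeping from this otherwise familiar failure: because the test runs with --preload=false, the image-caching goroutines download and save all eight images to the host cache (cache.go:87 "Successfully saved all images to host disk"), including two arch fix-ups where the registry manifest was amd64-only, even though both VM creations fail. A hedged sketch of the existence check behind the cache.go:157 "exists" lines (paths inferred from the log, not minikube's actual code):

package main

// Inferred sketch: verify the arch-specific image tarballs the log reports as
// saved; os.Stat mirrors the per-image "exists" checks seen above.

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	cacheDir := "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64"
	images := []string{
		"gcr.io/k8s-minikube/storage-provisioner_v5",
		"registry.k8s.io/pause_3.7",
		"registry.k8s.io/kube-scheduler_v1.24.4",
		"registry.k8s.io/coredns/coredns_v1.8.6",
		"registry.k8s.io/kube-controller-manager_v1.24.4",
		"registry.k8s.io/kube-apiserver_v1.24.4",
		"registry.k8s.io/kube-proxy_v1.24.4",
		"registry.k8s.io/etcd_3.5.3-0",
	}
	for _, img := range images {
		p := filepath.Join(cacheDir, filepath.FromSlash(img))
		if _, err := os.Stat(p); err != nil {
			fmt.Println("missing:", p)
		} else {
			fmt.Println("exists: ", p)
		}
	}
}

A later preload-enabled run would presumably skip these downloads entirely, as the TestMultiNode log above does when it finds the preloaded tarball in cache.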

TestScheduledStopUnix (9.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-109000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-109000 --memory=2048 --driver=qemu2 : exit status 80 (9.817245167s)

-- stdout --
	* [scheduled-stop-109000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-109000 in cluster scheduled-stop-109000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-109000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-109000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-109000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-109000 in cluster scheduled-stop-109000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-109000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-109000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-07-01 12:49:03.559248 -0700 PDT m=+858.630512418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-109000 -n scheduled-stop-109000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-109000 -n scheduled-stop-109000: exit status 7 (67.542625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-109000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-109000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-109000
--- FAIL: TestScheduledStopUnix (9.98s)
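
Note: every qemu2-driver start in this run fails at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. That implicates the socket_vmnet daemon on the CI host rather than minikube itself. A minimal host-side triage sketch, assuming socket_vmnet was installed via Homebrew at the paths shown in the logs (these commands are illustrative, not part of the test run):

	# Is the socket_vmnet daemon loaded, and does its socket exist?
	sudo launchctl list | grep -i socket_vmnet
	ls -l /var/run/socket_vmnet
	# If the socket is missing or refusing connections, restarting the
	# service may clear it (Homebrew runs it as a root service):
	sudo brew services restart socket_vmnet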

TestSkaffold (12.14s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3675386991 version
skaffold_test.go:63: skaffold version: v2.6.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-634000 --memory=2600 --driver=qemu2 
E0701 12:49:13.211642    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-634000 --memory=2600 --driver=qemu2 : exit status 80 (9.866115083s)

-- stdout --
	* [skaffold-634000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-634000 in cluster skaffold-634000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-634000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-634000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-634000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-634000 in cluster skaffold-634000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-634000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-634000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-07-01 12:49:15.70386 -0700 PDT m=+870.775353418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-634000 -n skaffold-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-634000 -n skaffold-634000: exit status 7 (62.219333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-634000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-634000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-634000
--- FAIL: TestSkaffold (12.14s)

TestRunningBinaryUpgrade (173.32s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E0701 12:50:04.954691    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
E0701 12:52:21.084856    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
E0701 12:52:48.793998    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/ingress-addon-legacy-673000/client.crt: no such file or directory
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-01 12:52:49.774663 -0700 PDT m=+1084.850200501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-981000 -n running-upgrade-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-981000 -n running-upgrade-981000: exit status 85 (83.773ms)

-- stdout --
	* Profile "running-upgrade-981000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-981000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-981000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-981000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-981000\"")
helpers_test.go:175: Cleaning up "running-upgrade-981000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-981000
--- FAIL: TestRunningBinaryUpgrade (173.32s)
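
Note: unlike the socket_vmnet failures elsewhere in this run, this test never reaches VM creation: installing the v1.6.2 release binary fails with a 404. minikube v1.6.2 predates darwin/arm64 builds, so a release asset for this host's architecture most likely does not exist. A quick check, assuming the standard release-bucket layout (URLs are illustrative):

	# Expect 404 for the arm64 asset and 200 for the amd64 one.
	curl -sI https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64 | head -n 1
	curl -sI https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-amd64 | head -n 1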

TestKubernetesUpgrade (15.48s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-256000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-256000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.966682s)

-- stdout --
	* [kubernetes-upgrade-256000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-256000 in cluster kubernetes-upgrade-256000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-256000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:52:50.171067    3530 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:52:50.171185    3530 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:52:50.171188    3530 out.go:309] Setting ErrFile to fd 2...
	I0701 12:52:50.171190    3530 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:52:50.171268    3530 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:52:50.172242    3530 out.go:303] Setting JSON to false
	I0701 12:52:50.187313    3530 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1340,"bootTime":1688239830,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:52:50.187369    3530 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:52:50.192259    3530 out.go:177] * [kubernetes-upgrade-256000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:52:50.199411    3530 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:52:50.199438    3530 notify.go:220] Checking for updates...
	I0701 12:52:50.203295    3530 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:52:50.206278    3530 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:52:50.209287    3530 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:52:50.212230    3530 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:52:50.215304    3530 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:52:50.218614    3530 config.go:182] Loaded profile config "cert-expiration-117000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:52:50.218670    3530 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:52:50.218708    3530 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:52:50.223281    3530 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:52:50.230307    3530 start.go:297] selected driver: qemu2
	I0701 12:52:50.230316    3530 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:52:50.230332    3530 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:52:50.232273    3530 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:52:50.235288    3530 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:52:50.238321    3530 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 12:52:50.238336    3530 cni.go:84] Creating CNI manager for ""
	I0701 12:52:50.238342    3530 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0701 12:52:50.238346    3530 start_flags.go:319] config:
	{Name:kubernetes-upgrade-256000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-256000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:52:50.242278    3530 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:52:50.247288    3530 out.go:177] * Starting control plane node kubernetes-upgrade-256000 in cluster kubernetes-upgrade-256000
	I0701 12:52:50.251303    3530 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0701 12:52:50.251327    3530 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0701 12:52:50.251340    3530 cache.go:57] Caching tarball of preloaded images
	I0701 12:52:50.251407    3530 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:52:50.251412    3530 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0701 12:52:50.251476    3530 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/kubernetes-upgrade-256000/config.json ...
	I0701 12:52:50.251489    3530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/kubernetes-upgrade-256000/config.json: {Name:mkd2d315627de28c7751c76f672a1b1de73f7d19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:52:50.251694    3530 start.go:365] acquiring machines lock for kubernetes-upgrade-256000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:52:50.251726    3530 start.go:369] acquired machines lock for "kubernetes-upgrade-256000" in 26.125µs
	I0701 12:52:50.251739    3530 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-256000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:52:50.251762    3530 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:52:50.260299    3530 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:52:50.276012    3530 start.go:159] libmachine.API.Create for "kubernetes-upgrade-256000" (driver="qemu2")
	I0701 12:52:50.276028    3530 client.go:168] LocalClient.Create starting
	I0701 12:52:50.276075    3530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:52:50.276093    3530 main.go:141] libmachine: Decoding PEM data...
	I0701 12:52:50.276101    3530 main.go:141] libmachine: Parsing certificate...
	I0701 12:52:50.276127    3530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:52:50.276140    3530 main.go:141] libmachine: Decoding PEM data...
	I0701 12:52:50.276147    3530 main.go:141] libmachine: Parsing certificate...
	I0701 12:52:50.276419    3530 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:52:50.407492    3530 main.go:141] libmachine: Creating SSH key...
	I0701 12:52:50.500981    3530 main.go:141] libmachine: Creating Disk image...
	I0701 12:52:50.500989    3530 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:52:50.501144    3530 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/disk.qcow2
	I0701 12:52:50.509635    3530 main.go:141] libmachine: STDOUT: 
	I0701 12:52:50.509647    3530 main.go:141] libmachine: STDERR: 
	I0701 12:52:50.509695    3530 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/disk.qcow2 +20000M
	I0701 12:52:50.516826    3530 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:52:50.516849    3530 main.go:141] libmachine: STDERR: 
	I0701 12:52:50.516874    3530 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/disk.qcow2
	I0701 12:52:50.516883    3530 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:52:50.516911    3530 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c9:b3:fb:4e:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/disk.qcow2
	I0701 12:52:50.518492    3530 main.go:141] libmachine: STDOUT: 
	I0701 12:52:50.518513    3530 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:52:50.518534    3530 client.go:171] LocalClient.Create took 242.507583ms
	I0701 12:52:52.520767    3530 start.go:128] duration metric: createHost completed in 2.269027709s
	I0701 12:52:52.520833    3530 start.go:83] releasing machines lock for "kubernetes-upgrade-256000", held for 2.269140792s
	W0701 12:52:52.520894    3530 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:52:52.538312    3530 out.go:177] * Deleting "kubernetes-upgrade-256000" in qemu2 ...
	W0701 12:52:52.559579    3530 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:52:52.559608    3530 start.go:687] Will try again in 5 seconds ...
	I0701 12:52:57.561712    3530 start.go:365] acquiring machines lock for kubernetes-upgrade-256000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:52:57.574236    3530 start.go:369] acquired machines lock for "kubernetes-upgrade-256000" in 12.453541ms
	I0701 12:52:57.574313    3530 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-256000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:52:57.574569    3530 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:52:57.586010    3530 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:52:57.630015    3530 start.go:159] libmachine.API.Create for "kubernetes-upgrade-256000" (driver="qemu2")
	I0701 12:52:57.630057    3530 client.go:168] LocalClient.Create starting
	I0701 12:52:57.630159    3530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:52:57.630194    3530 main.go:141] libmachine: Decoding PEM data...
	I0701 12:52:57.630218    3530 main.go:141] libmachine: Parsing certificate...
	I0701 12:52:57.630301    3530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:52:57.630328    3530 main.go:141] libmachine: Decoding PEM data...
	I0701 12:52:57.630342    3530 main.go:141] libmachine: Parsing certificate...
	I0701 12:52:57.630879    3530 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:52:57.827143    3530 main.go:141] libmachine: Creating SSH key...
	I0701 12:52:58.057087    3530 main.go:141] libmachine: Creating Disk image...
	I0701 12:52:58.057097    3530 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:52:58.057245    3530 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/disk.qcow2
	I0701 12:52:58.066121    3530 main.go:141] libmachine: STDOUT: 
	I0701 12:52:58.066138    3530 main.go:141] libmachine: STDERR: 
	I0701 12:52:58.066198    3530 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/disk.qcow2 +20000M
	I0701 12:52:58.073436    3530 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:52:58.073453    3530 main.go:141] libmachine: STDERR: 
	I0701 12:52:58.073469    3530 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/disk.qcow2
	I0701 12:52:58.073475    3530 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:52:58.073527    3530 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:23:d0:9e:71:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/disk.qcow2
	I0701 12:52:58.075081    3530 main.go:141] libmachine: STDOUT: 
	I0701 12:52:58.075094    3530 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:52:58.075107    3530 client.go:171] LocalClient.Create took 445.054208ms
	I0701 12:53:00.075946    3530 start.go:128] duration metric: createHost completed in 2.501403958s
	I0701 12:53:00.075990    3530 start.go:83] releasing machines lock for "kubernetes-upgrade-256000", held for 2.501784417s
	W0701 12:53:00.076359    3530 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-256000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-256000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:53:00.082854    3530 out.go:177] 
	W0701 12:53:00.086916    3530 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:53:00.086943    3530 out.go:239] * 
	* 
	W0701 12:53:00.089505    3530 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:53:00.097854    3530 out.go:177] 

** /stderr **
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-256000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-256000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-256000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-256000 status --format={{.Host}}: exit status 7 (35.992583ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-256000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-256000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.182079375s)
                                                
-- stdout --
	* [kubernetes-upgrade-256000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-256000 in cluster kubernetes-upgrade-256000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-256000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-256000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:53:00.277099    3562 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:53:00.277232    3562 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:53:00.277236    3562 out.go:309] Setting ErrFile to fd 2...
	I0701 12:53:00.277238    3562 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:53:00.277316    3562 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:53:00.278159    3562 out.go:303] Setting JSON to false
	I0701 12:53:00.293032    3562 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1350,"bootTime":1688239830,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:53:00.293101    3562 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:53:00.298235    3562 out.go:177] * [kubernetes-upgrade-256000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:53:00.305242    3562 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:53:00.309172    3562 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:53:00.305313    3562 notify.go:220] Checking for updates...
	I0701 12:53:00.317021    3562 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:53:00.321168    3562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:53:00.324226    3562 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:53:00.325374    3562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:53:00.328397    3562 config.go:182] Loaded profile config "kubernetes-upgrade-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0701 12:53:00.328648    3562 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:53:00.332251    3562 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 12:53:00.337153    3562 start.go:297] selected driver: qemu2
	I0701 12:53:00.337157    3562 start.go:944] validating driver "qemu2" against &{Name:kubernetes-upgrade-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-256000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:53:00.337206    3562 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:53:00.339240    3562 cni.go:84] Creating CNI manager for ""
	I0701 12:53:00.339254    3562 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:53:00.339259    3562 start_flags.go:319] config:
	{Name:kubernetes-upgrade-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-256000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:53:00.343344    3562 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:53:00.351183    3562 out.go:177] * Starting control plane node kubernetes-upgrade-256000 in cluster kubernetes-upgrade-256000
	I0701 12:53:00.355198    3562 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:53:00.355218    3562 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:53:00.355225    3562 cache.go:57] Caching tarball of preloaded images
	I0701 12:53:00.355284    3562 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:53:00.355289    3562 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:53:00.355340    3562 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/kubernetes-upgrade-256000/config.json ...
	I0701 12:53:00.355708    3562 start.go:365] acquiring machines lock for kubernetes-upgrade-256000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:53:00.355738    3562 start.go:369] acquired machines lock for "kubernetes-upgrade-256000" in 23.625µs
	I0701 12:53:00.355749    3562 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:53:00.355753    3562 fix.go:54] fixHost starting: 
	I0701 12:53:00.355874    3562 fix.go:102] recreateIfNeeded on kubernetes-upgrade-256000: state=Stopped err=<nil>
	W0701 12:53:00.355882    3562 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:53:00.363179    3562 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-256000" ...
	I0701 12:53:00.367239    3562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:23:d0:9e:71:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/disk.qcow2
	I0701 12:53:00.369269    3562 main.go:141] libmachine: STDOUT: 
	I0701 12:53:00.369285    3562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:53:00.369314    3562 fix.go:56] fixHost completed within 13.560917ms
	I0701 12:53:00.369319    3562 start.go:83] releasing machines lock for "kubernetes-upgrade-256000", held for 13.577917ms
	W0701 12:53:00.369327    3562 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:53:00.369375    3562 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:53:00.369380    3562 start.go:687] Will try again in 5 seconds ...
	I0701 12:53:05.371432    3562 start.go:365] acquiring machines lock for kubernetes-upgrade-256000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:53:05.371802    3562 start.go:369] acquired machines lock for "kubernetes-upgrade-256000" in 287.25µs
	I0701 12:53:05.371969    3562 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:53:05.371987    3562 fix.go:54] fixHost starting: 
	I0701 12:53:05.372773    3562 fix.go:102] recreateIfNeeded on kubernetes-upgrade-256000: state=Stopped err=<nil>
	W0701 12:53:05.372835    3562 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:53:05.378221    3562 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-256000" ...
	I0701 12:53:05.384402    3562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:23:d0:9e:71:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubernetes-upgrade-256000/disk.qcow2
	I0701 12:53:05.392947    3562 main.go:141] libmachine: STDOUT: 
	I0701 12:53:05.392998    3562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:53:05.393092    3562 fix.go:56] fixHost completed within 21.10625ms
	I0701 12:53:05.393111    3562 start.go:83] releasing machines lock for "kubernetes-upgrade-256000", held for 21.285166ms
	W0701 12:53:05.393283    3562 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-256000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-256000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:53:05.401201    3562 out.go:177] 
	W0701 12:53:05.409283    3562 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:53:05.409313    3562 out.go:239] * 
	* 
	W0701 12:53:05.411871    3562 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:53:05.420176    3562 out.go:177] 

** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-256000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-256000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-256000 version --output=json: exit status 1 (64.293959ms)

** stderr ** 
	error: context "kubernetes-upgrade-256000" does not exist

** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-07-01 12:53:05.498172 -0700 PDT m=+1100.574006001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-256000 -n kubernetes-upgrade-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-256000 -n kubernetes-upgrade-256000: exit status 7 (32.595125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-256000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-256000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-256000
--- FAIL: TestKubernetesUpgrade (15.48s)
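
Note: the --alsologtostderr trace above narrows the failure down: qemu-img convert and resize both succeed, and only the final exec through socket_vmnet_client fails. The connection can be probed in isolation with the same binary the driver uses; a sketch assuming the install paths from the log, with `true` as a do-nothing payload command:

	# socket_vmnet_client connects to the socket, then execs the given
	# command with the connection passed as an open fd; expect the same
	# "Connection refused" here while the daemon is down.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true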

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.74s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=15452
- KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current458233496/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.74s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.38s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=15452
- KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current245231608/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.38s)
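
Note: both TestHyperkitDriverSkipUpgrade subtests fail identically, and DRV_UNSUPPORTED_OS is the expected outcome here: the hyperkit driver only runs on Intel Macs, and this agent is Apple Silicon. Exit status 56 therefore reflects the host architecture, not an upgrade regression. A trivial check (sketch):

	uname -m    # prints "arm64" on this agent; hyperkit requires x86_64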

TestStoppedBinaryUpgrade/Setup (168.12s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (168.12s)
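
Note: this Setup step downloads an old minikube release binary to upgrade from, and the 404 most plausibly means no darwin/arm64 asset exists for v1.6.2, which predates Apple Silicon support. A sketch to verify, assuming the release mirror follows the usual storage.googleapis.com/minikube/releases layout (the exact URL the test requests is an assumption here):

	curl -sI https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64 | head -n 1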

TestPause/serial/Start (9.79s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-618000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-618000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.71697925s)

-- stdout --
	* [pause-618000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-618000 in cluster pause-618000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-618000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-618000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-618000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-618000 -n pause-618000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-618000 -n pause-618000: exit status 7 (67.944167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-618000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.79s)
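
Note: this is the dominant failure mode of the whole run. The qemu2 driver launches qemu-system-aarch64 through socket_vmnet_client, and every attempt dies with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', i.e. the socket_vmnet daemon is not running (or not listening on that path) on the agent. A quick triage sketch, assuming socket_vmnet is installed under /opt/socket_vmnet as the logs show:

	ls -l /var/run/socket_vmnet      # the unix socket must exist
	pgrep -fl socket_vmnet           # the daemon must be running (normally as root)
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true    # succeeds only if the socket accepts connections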

TestNoKubernetes/serial/StartWithK8s (9.77s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-739000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-739000 --driver=qemu2 : exit status 80 (9.704816167s)

-- stdout --
	* [NoKubernetes-739000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-739000 in cluster NoKubernetes-739000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-739000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-739000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-739000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-739000 -n NoKubernetes-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-739000 -n NoKubernetes-739000: exit status 7 (67.208333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-739000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.77s)

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-739000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-739000 --no-kubernetes --driver=qemu2 : exit status 80 (5.249295791s)

-- stdout --
	* [NoKubernetes-739000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-739000
	* Restarting existing qemu2 VM for "NoKubernetes-739000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-739000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-739000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-739000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-739000 -n NoKubernetes-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-739000 -n NoKubernetes-739000: exit status 7 (65.425375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-739000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-739000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-739000 --no-kubernetes --driver=qemu2 : exit status 80 (5.239939583s)

-- stdout --
	* [NoKubernetes-739000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-739000
	* Restarting existing qemu2 VM for "NoKubernetes-739000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-739000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-739000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-739000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-739000 -n NoKubernetes-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-739000 -n NoKubernetes-739000: exit status 7 (71.330208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-739000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-739000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-739000 --driver=qemu2 : exit status 80 (5.240041667s)

-- stdout --
	* [NoKubernetes-739000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-739000
	* Restarting existing qemu2 VM for "NoKubernetes-739000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-739000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-739000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-739000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-739000 -n NoKubernetes-739000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-739000 -n NoKubernetes-739000: exit status 7 (69.039584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-739000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)
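
Note: all four failing NoKubernetes subtests share the socket_vmnet root cause described above; the first run leaves a stopped NoKubernetes-739000 profile behind, which the later subtests then fail to restart. Between reruns, cleaning up the stale profile (as the error text itself suggests) avoids compounding failures:

	out/minikube-darwin-arm64 delete -p NoKubernetes-739000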

TestNetworkPlugins/group/auto/Start (9.74s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
E0701 12:53:45.497796    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.7370605s)

-- stdout --
	* [auto-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-674000 in cluster auto-674000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-674000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:53:41.718021    3676 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:53:41.718154    3676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:53:41.718157    3676 out.go:309] Setting ErrFile to fd 2...
	I0701 12:53:41.718160    3676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:53:41.718229    3676 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:53:41.719224    3676 out.go:303] Setting JSON to false
	I0701 12:53:41.734439    3676 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1391,"bootTime":1688239830,"procs":395,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:53:41.734495    3676 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:53:41.740055    3676 out.go:177] * [auto-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:53:41.747126    3676 notify.go:220] Checking for updates...
	I0701 12:53:41.752041    3676 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:53:41.755060    3676 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:53:41.758078    3676 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:53:41.760993    3676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:53:41.764029    3676 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:53:41.767065    3676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:53:41.770304    3676 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:53:41.770350    3676 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:53:41.775044    3676 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:53:41.782019    3676 start.go:297] selected driver: qemu2
	I0701 12:53:41.782026    3676 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:53:41.782034    3676 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:53:41.783997    3676 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:53:41.786997    3676 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:53:41.790186    3676 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:53:41.790204    3676 cni.go:84] Creating CNI manager for ""
	I0701 12:53:41.790210    3676 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:53:41.790216    3676 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:53:41.790221    3676 start_flags.go:319] config:
	{Name:auto-674000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}

	I0701 12:53:41.794297    3676 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:53:41.801032    3676 out.go:177] * Starting control plane node auto-674000 in cluster auto-674000
	I0701 12:53:41.805062    3676 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:53:41.805093    3676 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:53:41.805109    3676 cache.go:57] Caching tarball of preloaded images
	I0701 12:53:41.805181    3676 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:53:41.805186    3676 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:53:41.805247    3676 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/auto-674000/config.json ...
	I0701 12:53:41.805264    3676 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/auto-674000/config.json: {Name:mk1edadfe6a3c1ab39282f1357921df7cbf15a1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:53:41.805463    3676 start.go:365] acquiring machines lock for auto-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:53:41.805492    3676 start.go:369] acquired machines lock for "auto-674000" in 23.75µs
	I0701 12:53:41.805504    3676 start.go:93] Provisioning new machine with config: &{Name:auto-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:53:41.805531    3676 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:53:41.814073    3676 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:53:41.829944    3676 start.go:159] libmachine.API.Create for "auto-674000" (driver="qemu2")
	I0701 12:53:41.829975    3676 client.go:168] LocalClient.Create starting
	I0701 12:53:41.830026    3676 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:53:41.830045    3676 main.go:141] libmachine: Decoding PEM data...
	I0701 12:53:41.830059    3676 main.go:141] libmachine: Parsing certificate...
	I0701 12:53:41.830105    3676 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:53:41.830119    3676 main.go:141] libmachine: Decoding PEM data...
	I0701 12:53:41.830126    3676 main.go:141] libmachine: Parsing certificate...
	I0701 12:53:41.830453    3676 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:53:41.940664    3676 main.go:141] libmachine: Creating SSH key...
	I0701 12:53:42.033096    3676 main.go:141] libmachine: Creating Disk image...
	I0701 12:53:42.033105    3676 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:53:42.033272    3676 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/disk.qcow2
	I0701 12:53:42.041609    3676 main.go:141] libmachine: STDOUT: 
	I0701 12:53:42.041625    3676 main.go:141] libmachine: STDERR: 
	I0701 12:53:42.041676    3676 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/disk.qcow2 +20000M
	I0701 12:53:42.048824    3676 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:53:42.048852    3676 main.go:141] libmachine: STDERR: 
	I0701 12:53:42.048875    3676 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/disk.qcow2
	I0701 12:53:42.048886    3676 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:53:42.048931    3676 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:ce:ac:64:6d:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/disk.qcow2
	I0701 12:53:42.050475    3676 main.go:141] libmachine: STDOUT: 
	I0701 12:53:42.050488    3676 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:53:42.050507    3676 client.go:171] LocalClient.Create took 220.532875ms
	I0701 12:53:44.052650    3676 start.go:128] duration metric: createHost completed in 2.247144083s
	I0701 12:53:44.052704    3676 start.go:83] releasing machines lock for "auto-674000", held for 2.247241584s
	W0701 12:53:44.052762    3676 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:53:44.063838    3676 out.go:177] * Deleting "auto-674000" in qemu2 ...
	W0701 12:53:44.082913    3676 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:53:44.082945    3676 start.go:687] Will try again in 5 seconds ...
	I0701 12:53:49.084563    3676 start.go:365] acquiring machines lock for auto-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:53:49.085067    3676 start.go:369] acquired machines lock for "auto-674000" in 384.416µs
	I0701 12:53:49.085199    3676 start.go:93] Provisioning new machine with config: &{Name:auto-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:53:49.085559    3676 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:53:49.094004    3676 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:53:49.141754    3676 start.go:159] libmachine.API.Create for "auto-674000" (driver="qemu2")
	I0701 12:53:49.141800    3676 client.go:168] LocalClient.Create starting
	I0701 12:53:49.141950    3676 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:53:49.142000    3676 main.go:141] libmachine: Decoding PEM data...
	I0701 12:53:49.142025    3676 main.go:141] libmachine: Parsing certificate...
	I0701 12:53:49.142121    3676 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:53:49.142152    3676 main.go:141] libmachine: Decoding PEM data...
	I0701 12:53:49.142168    3676 main.go:141] libmachine: Parsing certificate...
	I0701 12:53:49.142734    3676 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:53:49.269252    3676 main.go:141] libmachine: Creating SSH key...
	I0701 12:53:49.368552    3676 main.go:141] libmachine: Creating Disk image...
	I0701 12:53:49.368558    3676 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:53:49.368713    3676 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/disk.qcow2
	I0701 12:53:49.377243    3676 main.go:141] libmachine: STDOUT: 
	I0701 12:53:49.377259    3676 main.go:141] libmachine: STDERR: 
	I0701 12:53:49.377327    3676 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/disk.qcow2 +20000M
	I0701 12:53:49.384583    3676 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:53:49.384616    3676 main.go:141] libmachine: STDERR: 
	I0701 12:53:49.384631    3676 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/disk.qcow2
	I0701 12:53:49.384637    3676 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:53:49.384674    3676 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:b6:d2:08:f9:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/auto-674000/disk.qcow2
	I0701 12:53:49.386137    3676 main.go:141] libmachine: STDOUT: 
	I0701 12:53:49.386152    3676 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:53:49.386169    3676 client.go:171] LocalClient.Create took 244.368333ms
	I0701 12:53:51.388393    3676 start.go:128] duration metric: createHost completed in 2.302837833s
	I0701 12:53:51.388470    3676 start.go:83] releasing machines lock for "auto-674000", held for 2.303420917s
	W0701 12:53:51.388847    3676 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:53:51.398541    3676 out.go:177] 
	W0701 12:53:51.402590    3676 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:53:51.402625    3676 out.go:239] * 
	* 
	W0701 12:53:51.405038    3676 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:53:51.414498    3676 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.74s)
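
Note: the remaining TestNetworkPlugins/*/Start failures in this run all follow the create -> "Connection refused" -> delete -> retry -> GUEST_PROVISION sequence visible in the verbose trace above; only the profile name and CNI flag differ. The stray cert_rotation error at the top of this test appears to reference the client certificate of the already-deleted functional-011000 profile and looks unrelated to this failure.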

TestNetworkPlugins/group/kindnet/Start (9.87s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.863370833s)

-- stdout --
	* [kindnet-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-674000 in cluster kindnet-674000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-674000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:53:53.478474    3791 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:53:53.478598    3791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:53:53.478601    3791 out.go:309] Setting ErrFile to fd 2...
	I0701 12:53:53.478604    3791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:53:53.478674    3791 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:53:53.479737    3791 out.go:303] Setting JSON to false
	I0701 12:53:53.495026    3791 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1403,"bootTime":1688239830,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:53:53.495079    3791 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:53:53.500844    3791 out.go:177] * [kindnet-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:53:53.507813    3791 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:53:53.507833    3791 notify.go:220] Checking for updates...
	I0701 12:53:53.515772    3791 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:53:53.518756    3791 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:53:53.522836    3791 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:53:53.523972    3791 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:53:53.526769    3791 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:53:53.530135    3791 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:53:53.530184    3791 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:53:53.534626    3791 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:53:53.541807    3791 start.go:297] selected driver: qemu2
	I0701 12:53:53.541829    3791 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:53:53.541838    3791 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:53:53.543814    3791 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:53:53.546778    3791 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:53:53.549916    3791 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:53:53.549934    3791 cni.go:84] Creating CNI manager for "kindnet"
	I0701 12:53:53.549938    3791 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0701 12:53:53.549943    3791 start_flags.go:319] config:
	{Name:kindnet-674000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:53:53.553867    3791 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:53:53.561767    3791 out.go:177] * Starting control plane node kindnet-674000 in cluster kindnet-674000
	I0701 12:53:53.565767    3791 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:53:53.565792    3791 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:53:53.565805    3791 cache.go:57] Caching tarball of preloaded images
	I0701 12:53:53.565885    3791 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:53:53.565894    3791 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:53:53.565971    3791 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/kindnet-674000/config.json ...
	I0701 12:53:53.565983    3791 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/kindnet-674000/config.json: {Name:mk70b7996bf920f4dee263273d17f8fee0d382fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:53:53.566194    3791 start.go:365] acquiring machines lock for kindnet-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:53:53.566225    3791 start.go:369] acquired machines lock for "kindnet-674000" in 24.958µs
	I0701 12:53:53.566237    3791 start.go:93] Provisioning new machine with config: &{Name:kindnet-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:53:53.566272    3791 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:53:53.574785    3791 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:53:53.590545    3791 start.go:159] libmachine.API.Create for "kindnet-674000" (driver="qemu2")
	I0701 12:53:53.590574    3791 client.go:168] LocalClient.Create starting
	I0701 12:53:53.590648    3791 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:53:53.590669    3791 main.go:141] libmachine: Decoding PEM data...
	I0701 12:53:53.590681    3791 main.go:141] libmachine: Parsing certificate...
	I0701 12:53:53.590725    3791 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:53:53.590739    3791 main.go:141] libmachine: Decoding PEM data...
	I0701 12:53:53.590749    3791 main.go:141] libmachine: Parsing certificate...
	I0701 12:53:53.591051    3791 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:53:53.705431    3791 main.go:141] libmachine: Creating SSH key...
	I0701 12:53:53.958032    3791 main.go:141] libmachine: Creating Disk image...
	I0701 12:53:53.958040    3791 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:53:53.958241    3791 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/disk.qcow2
	I0701 12:53:53.967574    3791 main.go:141] libmachine: STDOUT: 
	I0701 12:53:53.967585    3791 main.go:141] libmachine: STDERR: 
	I0701 12:53:53.967643    3791 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/disk.qcow2 +20000M
	I0701 12:53:53.974923    3791 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:53:53.974938    3791 main.go:141] libmachine: STDERR: 
	I0701 12:53:53.974950    3791 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/disk.qcow2
	I0701 12:53:53.974957    3791 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:53:53.974994    3791 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:e1:0b:cd:dd:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/disk.qcow2
	I0701 12:53:53.976507    3791 main.go:141] libmachine: STDOUT: 
	I0701 12:53:53.976530    3791 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:53:53.976546    3791 client.go:171] LocalClient.Create took 385.975375ms
	I0701 12:53:55.978663    3791 start.go:128] duration metric: createHost completed in 2.4124135s
	I0701 12:53:55.978734    3791 start.go:83] releasing machines lock for "kindnet-674000", held for 2.412544833s
	W0701 12:53:55.978796    3791 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:53:55.990085    3791 out.go:177] * Deleting "kindnet-674000" in qemu2 ...
	W0701 12:53:56.009244    3791 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:53:56.009270    3791 start.go:687] Will try again in 5 seconds ...
	I0701 12:54:01.011388    3791 start.go:365] acquiring machines lock for kindnet-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:54:01.011975    3791 start.go:369] acquired machines lock for "kindnet-674000" in 483.25µs
	I0701 12:54:01.012087    3791 start.go:93] Provisioning new machine with config: &{Name:kindnet-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:54:01.012520    3791 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:54:01.023306    3791 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:54:01.070449    3791 start.go:159] libmachine.API.Create for "kindnet-674000" (driver="qemu2")
	I0701 12:54:01.070482    3791 client.go:168] LocalClient.Create starting
	I0701 12:54:01.070606    3791 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:54:01.070655    3791 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:01.070673    3791 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:01.070744    3791 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:54:01.070772    3791 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:01.070784    3791 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:01.071256    3791 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:54:01.194912    3791 main.go:141] libmachine: Creating SSH key...
	I0701 12:54:01.258195    3791 main.go:141] libmachine: Creating Disk image...
	I0701 12:54:01.258204    3791 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:54:01.258348    3791 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/disk.qcow2
	I0701 12:54:01.266941    3791 main.go:141] libmachine: STDOUT: 
	I0701 12:54:01.266953    3791 main.go:141] libmachine: STDERR: 
	I0701 12:54:01.266998    3791 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/disk.qcow2 +20000M
	I0701 12:54:01.274042    3791 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:54:01.274053    3791 main.go:141] libmachine: STDERR: 
	I0701 12:54:01.274068    3791 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/disk.qcow2
	I0701 12:54:01.274075    3791 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:54:01.274112    3791 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:b9:6c:78:ea:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kindnet-674000/disk.qcow2
	I0701 12:54:01.275656    3791 main.go:141] libmachine: STDOUT: 
	I0701 12:54:01.275667    3791 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:54:01.275679    3791 client.go:171] LocalClient.Create took 205.19675ms
	I0701 12:54:03.277789    3791 start.go:128] duration metric: createHost completed in 2.265288792s
	I0701 12:54:03.277994    3791 start.go:83] releasing machines lock for "kindnet-674000", held for 2.2660325s
	W0701 12:54:03.278289    3791 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:03.286296    3791 out.go:177] 
	W0701 12:54:03.290477    3791 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:54:03.290531    3791 out.go:239] * 
	W0701 12:54:03.293445    3791 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:54:03.302223    3791 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.87s)
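
All of the */Start failures in this group trace back to the same root cause visible in the log above: nothing is listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client cannot hand QEMU a vmnet file descriptor, and every VM creation attempt dies with "Connection refused". A minimal triage sketch for the CI host follows; the paths are taken from the log itself, but the checks are illustrative and were not part of this test run:

	# Is the unix socket present, and is the socket_vmnet daemon alive?
	test -S /var/run/socket_vmnet && echo "socket present" || echo "socket missing"
	pgrep -fl socket_vmnet || echo "socket_vmnet daemon not running"

If the daemon is down, restarting it as root (it must be running before minikube's qemu2 driver can use socket_vmnet networking) should clear this entire group of failures.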

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.903913042s)

                                                
                                                
-- stdout --
	* [flannel-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-674000 in cluster flannel-674000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-674000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 12:54:05.474535    3905 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:54:05.474680    3905 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:54:05.474683    3905 out.go:309] Setting ErrFile to fd 2...
	I0701 12:54:05.474685    3905 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:54:05.474772    3905 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:54:05.475856    3905 out.go:303] Setting JSON to false
	I0701 12:54:05.491222    3905 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1415,"bootTime":1688239830,"procs":397,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:54:05.491313    3905 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:54:05.496579    3905 out.go:177] * [flannel-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:54:05.502548    3905 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:54:05.506541    3905 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:54:05.502611    3905 notify.go:220] Checking for updates...
	I0701 12:54:05.512489    3905 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:54:05.515531    3905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:54:05.518502    3905 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:54:05.521508    3905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:54:05.524776    3905 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:54:05.524815    3905 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:54:05.529534    3905 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:54:05.536483    3905 start.go:297] selected driver: qemu2
	I0701 12:54:05.536489    3905 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:54:05.536496    3905 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:54:05.538409    3905 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:54:05.541454    3905 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:54:05.544572    3905 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:54:05.544601    3905 cni.go:84] Creating CNI manager for "flannel"
	I0701 12:54:05.544605    3905 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0701 12:54:05.544612    3905 start_flags.go:319] config:
	{Name:flannel-674000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:54:05.549028    3905 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:54:05.556458    3905 out.go:177] * Starting control plane node flannel-674000 in cluster flannel-674000
	I0701 12:54:05.560507    3905 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:54:05.560535    3905 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:54:05.560546    3905 cache.go:57] Caching tarball of preloaded images
	I0701 12:54:05.560629    3905 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:54:05.560635    3905 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:54:05.560695    3905 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/flannel-674000/config.json ...
	I0701 12:54:05.560708    3905 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/flannel-674000/config.json: {Name:mk079bde9313c7e5f900b2c2495f2235bb939bdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:54:05.560914    3905 start.go:365] acquiring machines lock for flannel-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:54:05.560945    3905 start.go:369] acquired machines lock for "flannel-674000" in 25.292µs
	I0701 12:54:05.560957    3905 start.go:93] Provisioning new machine with config: &{Name:flannel-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:54:05.560987    3905 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:54:05.569493    3905 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:54:05.585512    3905 start.go:159] libmachine.API.Create for "flannel-674000" (driver="qemu2")
	I0701 12:54:05.585540    3905 client.go:168] LocalClient.Create starting
	I0701 12:54:05.585599    3905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:54:05.585623    3905 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:05.585633    3905 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:05.585673    3905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:54:05.585688    3905 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:05.585695    3905 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:05.586044    3905 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:54:05.696849    3905 main.go:141] libmachine: Creating SSH key...
	I0701 12:54:05.849327    3905 main.go:141] libmachine: Creating Disk image...
	I0701 12:54:05.849338    3905 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:54:05.849509    3905 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/disk.qcow2
	I0701 12:54:05.858464    3905 main.go:141] libmachine: STDOUT: 
	I0701 12:54:05.858477    3905 main.go:141] libmachine: STDERR: 
	I0701 12:54:05.858540    3905 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/disk.qcow2 +20000M
	I0701 12:54:05.865845    3905 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:54:05.865865    3905 main.go:141] libmachine: STDERR: 
	I0701 12:54:05.865885    3905 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/disk.qcow2
	I0701 12:54:05.865891    3905 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:54:05.865928    3905 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:21:e7:e3:c6:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/disk.qcow2
	I0701 12:54:05.867440    3905 main.go:141] libmachine: STDOUT: 
	I0701 12:54:05.867453    3905 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:54:05.867471    3905 client.go:171] LocalClient.Create took 281.931291ms
	I0701 12:54:07.869591    3905 start.go:128] duration metric: createHost completed in 2.308630292s
	I0701 12:54:07.869657    3905 start.go:83] releasing machines lock for "flannel-674000", held for 2.308746375s
	W0701 12:54:07.869718    3905 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:07.880890    3905 out.go:177] * Deleting "flannel-674000" in qemu2 ...
	W0701 12:54:07.900760    3905 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:07.900787    3905 start.go:687] Will try again in 5 seconds ...
	I0701 12:54:12.902952    3905 start.go:365] acquiring machines lock for flannel-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:54:12.903497    3905 start.go:369] acquired machines lock for "flannel-674000" in 419.958µs
	I0701 12:54:12.903659    3905 start.go:93] Provisioning new machine with config: &{Name:flannel-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:54:12.903949    3905 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:54:12.914624    3905 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:54:12.962448    3905 start.go:159] libmachine.API.Create for "flannel-674000" (driver="qemu2")
	I0701 12:54:12.962492    3905 client.go:168] LocalClient.Create starting
	I0701 12:54:12.962628    3905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:54:12.962692    3905 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:12.962710    3905 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:12.962798    3905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:54:12.962834    3905 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:12.962852    3905 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:12.963423    3905 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:54:13.086871    3905 main.go:141] libmachine: Creating SSH key...
	I0701 12:54:13.290716    3905 main.go:141] libmachine: Creating Disk image...
	I0701 12:54:13.290724    3905 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:54:13.290904    3905 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/disk.qcow2
	I0701 12:54:13.299972    3905 main.go:141] libmachine: STDOUT: 
	I0701 12:54:13.299999    3905 main.go:141] libmachine: STDERR: 
	I0701 12:54:13.300064    3905 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/disk.qcow2 +20000M
	I0701 12:54:13.307328    3905 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:54:13.307342    3905 main.go:141] libmachine: STDERR: 
	I0701 12:54:13.307365    3905 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/disk.qcow2
	I0701 12:54:13.307372    3905 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:54:13.307418    3905 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:7e:9f:c3:cc:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/flannel-674000/disk.qcow2
	I0701 12:54:13.308885    3905 main.go:141] libmachine: STDOUT: 
	I0701 12:54:13.308902    3905 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:54:13.308918    3905 client.go:171] LocalClient.Create took 346.428625ms
	I0701 12:54:15.311105    3905 start.go:128] duration metric: createHost completed in 2.407161834s
	I0701 12:54:15.311175    3905 start.go:83] releasing machines lock for "flannel-674000", held for 2.407696375s
	W0701 12:54:15.311622    3905 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:15.321275    3905 out.go:177] 
	W0701 12:54:15.325323    3905 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:54:15.325350    3905 out.go:239] * 
	W0701 12:54:15.328034    3905 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:54:15.338201    3905 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.91s)
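
Same failure mode as kindnet/Start above: the qemu2 driver cannot reach the socket_vmnet daemon, and the triage sketch under that test applies here unchanged.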

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.812625417s)
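
Note: as the E0701 line in the captured stderr below shows, minikube treats --enable-default-cni as deprecated and rewrites it to --cni=bridge. An equivalent invocation without the deprecated flag would be (illustrative; not run as part of this report):

	out/minikube-darwin-arm64 start -p enable-default-cni-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2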

                                                
                                                
-- stdout --
	* [enable-default-cni-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-674000 in cluster enable-default-cni-674000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-674000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 12:54:17.626608    4028 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:54:17.626770    4028 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:54:17.626774    4028 out.go:309] Setting ErrFile to fd 2...
	I0701 12:54:17.626776    4028 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:54:17.626843    4028 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:54:17.627900    4028 out.go:303] Setting JSON to false
	I0701 12:54:17.643056    4028 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1427,"bootTime":1688239830,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:54:17.643126    4028 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:54:17.647852    4028 out.go:177] * [enable-default-cni-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:54:17.654885    4028 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:54:17.658662    4028 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:54:17.654915    4028 notify.go:220] Checking for updates...
	I0701 12:54:17.661898    4028 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:54:17.664860    4028 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:54:17.667862    4028 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:54:17.670820    4028 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:54:17.674166    4028 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:54:17.674214    4028 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:54:17.678888    4028 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:54:17.685793    4028 start.go:297] selected driver: qemu2
	I0701 12:54:17.685799    4028 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:54:17.685805    4028 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:54:17.687670    4028 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:54:17.690852    4028 out.go:177] * Automatically selected the socket_vmnet network
	E0701 12:54:17.693890    4028 start_flags.go:453] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0701 12:54:17.693899    4028 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:54:17.693915    4028 cni.go:84] Creating CNI manager for "bridge"
	I0701 12:54:17.693919    4028 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:54:17.693928    4028 start_flags.go:319] config:
	{Name:enable-default-cni-674000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:54:17.697753    4028 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:54:17.704632    4028 out.go:177] * Starting control plane node enable-default-cni-674000 in cluster enable-default-cni-674000
	I0701 12:54:17.708832    4028 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:54:17.708866    4028 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:54:17.708880    4028 cache.go:57] Caching tarball of preloaded images
	I0701 12:54:17.708957    4028 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:54:17.708962    4028 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:54:17.709021    4028 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/enable-default-cni-674000/config.json ...
	I0701 12:54:17.709034    4028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/enable-default-cni-674000/config.json: {Name:mk5b9a4cbc2e7c693d9b241e7274bd8f03a160db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:54:17.709234    4028 start.go:365] acquiring machines lock for enable-default-cni-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:54:17.709265    4028 start.go:369] acquired machines lock for "enable-default-cni-674000" in 23.333µs
	I0701 12:54:17.709278    4028 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:54:17.709314    4028 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:54:17.717797    4028 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:54:17.733379    4028 start.go:159] libmachine.API.Create for "enable-default-cni-674000" (driver="qemu2")
	I0701 12:54:17.733404    4028 client.go:168] LocalClient.Create starting
	I0701 12:54:17.733465    4028 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:54:17.733491    4028 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:17.733502    4028 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:17.733551    4028 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:54:17.733566    4028 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:17.733577    4028 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:17.733906    4028 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:54:17.838981    4028 main.go:141] libmachine: Creating SSH key...
	I0701 12:54:17.973108    4028 main.go:141] libmachine: Creating Disk image...
	I0701 12:54:17.973114    4028 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:54:17.973266    4028 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/disk.qcow2
	I0701 12:54:17.981803    4028 main.go:141] libmachine: STDOUT: 
	I0701 12:54:17.981817    4028 main.go:141] libmachine: STDERR: 
	I0701 12:54:17.981866    4028 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/disk.qcow2 +20000M
	I0701 12:54:17.989020    4028 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:54:17.989035    4028 main.go:141] libmachine: STDERR: 
	I0701 12:54:17.989055    4028 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/disk.qcow2
	I0701 12:54:17.989063    4028 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:54:17.989095    4028 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:56:cf:cf:16:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/disk.qcow2
	I0701 12:54:17.990635    4028 main.go:141] libmachine: STDOUT: 
	I0701 12:54:17.990649    4028 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:54:17.990667    4028 client.go:171] LocalClient.Create took 257.262375ms
	I0701 12:54:19.992815    4028 start.go:128] duration metric: createHost completed in 2.283529209s
	I0701 12:54:19.992930    4028 start.go:83] releasing machines lock for "enable-default-cni-674000", held for 2.283672375s
	W0701 12:54:19.992987    4028 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:20.003000    4028 out.go:177] * Deleting "enable-default-cni-674000" in qemu2 ...
	W0701 12:54:20.022240    4028 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:20.022266    4028 start.go:687] Will try again in 5 seconds ...
	I0701 12:54:25.024392    4028 start.go:365] acquiring machines lock for enable-default-cni-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:54:25.024915    4028 start.go:369] acquired machines lock for "enable-default-cni-674000" in 388.917µs
	I0701 12:54:25.025060    4028 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:54:25.025391    4028 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:54:25.036191    4028 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:54:25.082450    4028 start.go:159] libmachine.API.Create for "enable-default-cni-674000" (driver="qemu2")
	I0701 12:54:25.082491    4028 client.go:168] LocalClient.Create starting
	I0701 12:54:25.082615    4028 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:54:25.082667    4028 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:25.082684    4028 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:25.082773    4028 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:54:25.082809    4028 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:25.082825    4028 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:25.083324    4028 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:54:25.219587    4028 main.go:141] libmachine: Creating SSH key...
	I0701 12:54:25.351908    4028 main.go:141] libmachine: Creating Disk image...
	I0701 12:54:25.351914    4028 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:54:25.352069    4028 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/disk.qcow2
	I0701 12:54:25.360812    4028 main.go:141] libmachine: STDOUT: 
	I0701 12:54:25.360827    4028 main.go:141] libmachine: STDERR: 
	I0701 12:54:25.360893    4028 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/disk.qcow2 +20000M
	I0701 12:54:25.368031    4028 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:54:25.368044    4028 main.go:141] libmachine: STDERR: 
	I0701 12:54:25.368055    4028 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/disk.qcow2
	I0701 12:54:25.368062    4028 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:54:25.368096    4028 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:6d:51:a4:72:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/enable-default-cni-674000/disk.qcow2
	I0701 12:54:25.369538    4028 main.go:141] libmachine: STDOUT: 
	I0701 12:54:25.369549    4028 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:54:25.369562    4028 client.go:171] LocalClient.Create took 287.069375ms
	I0701 12:54:27.371714    4028 start.go:128] duration metric: createHost completed in 2.346336792s
	I0701 12:54:27.371770    4028 start.go:83] releasing machines lock for "enable-default-cni-674000", held for 2.346873042s
	W0701 12:54:27.372156    4028 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:27.381753    4028 out.go:177] 
	W0701 12:54:27.385769    4028 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:54:27.385825    4028 out.go:239] * 
	* 
	W0701 12:54:27.388762    4028 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:54:27.398775    4028 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.82s)
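
The failures in this group all share one root cause, visible in the stderr above: socket_vmnet_client cannot reach the host daemon at /var/run/socket_vmnet, so the qemu-system-aarch64 command that libmachine logs is never actually launched. The following is a minimal Go sketch that reproduces just the failing dial, useful for checking the host outside the test suite (the socket path comes from the logs; the program itself is illustrative and not part of the suite):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the same Unix socket that socket_vmnet_client dials above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// While the daemon is down (or the socket is absent) this prints
			// the same "connection refused" seen throughout this report.
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial is refused, starting the daemon on the host should unblock every test in this group, e.g. sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet per the socket_vmnet README (the gateway address is an example value, not one taken from this run).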

TestNetworkPlugins/group/bridge/Start (9.67s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.666716167s)

-- stdout --
	* [bridge-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-674000 in cluster bridge-674000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-674000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:54:29.534071    4142 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:54:29.534213    4142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:54:29.534215    4142 out.go:309] Setting ErrFile to fd 2...
	I0701 12:54:29.534218    4142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:54:29.534292    4142 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:54:29.535309    4142 out.go:303] Setting JSON to false
	I0701 12:54:29.550724    4142 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1439,"bootTime":1688239830,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:54:29.550789    4142 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:54:29.554901    4142 out.go:177] * [bridge-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:54:29.561998    4142 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:54:29.565938    4142 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:54:29.562051    4142 notify.go:220] Checking for updates...
	I0701 12:54:29.571981    4142 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:54:29.574988    4142 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:54:29.578005    4142 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:54:29.580997    4142 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:54:29.582624    4142 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:54:29.582660    4142 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:54:29.586926    4142 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:54:29.593765    4142 start.go:297] selected driver: qemu2
	I0701 12:54:29.593772    4142 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:54:29.593782    4142 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:54:29.595649    4142 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:54:29.599001    4142 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:54:29.602050    4142 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:54:29.602067    4142 cni.go:84] Creating CNI manager for "bridge"
	I0701 12:54:29.602071    4142 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:54:29.602219    4142 start_flags.go:319] config:
	{Name:bridge-674000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:54:29.607161    4142 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:54:29.613953    4142 out.go:177] * Starting control plane node bridge-674000 in cluster bridge-674000
	I0701 12:54:29.617951    4142 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:54:29.617969    4142 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:54:29.617978    4142 cache.go:57] Caching tarball of preloaded images
	I0701 12:54:29.618033    4142 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:54:29.618037    4142 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:54:29.618086    4142 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/bridge-674000/config.json ...
	I0701 12:54:29.618098    4142 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/bridge-674000/config.json: {Name:mke7ab2c7d3796a2ca1469987b31b08dc7c37185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:54:29.618291    4142 start.go:365] acquiring machines lock for bridge-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:54:29.618317    4142 start.go:369] acquired machines lock for "bridge-674000" in 21.125µs
	I0701 12:54:29.618329    4142 start.go:93] Provisioning new machine with config: &{Name:bridge-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:54:29.618359    4142 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:54:29.627024    4142 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:54:29.642134    4142 start.go:159] libmachine.API.Create for "bridge-674000" (driver="qemu2")
	I0701 12:54:29.642152    4142 client.go:168] LocalClient.Create starting
	I0701 12:54:29.642206    4142 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:54:29.642224    4142 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:29.642236    4142 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:29.642284    4142 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:54:29.642297    4142 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:29.642307    4142 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:29.642601    4142 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:54:29.753056    4142 main.go:141] libmachine: Creating SSH key...
	I0701 12:54:29.783095    4142 main.go:141] libmachine: Creating Disk image...
	I0701 12:54:29.783101    4142 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:54:29.783232    4142 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/disk.qcow2
	I0701 12:54:29.791732    4142 main.go:141] libmachine: STDOUT: 
	I0701 12:54:29.791747    4142 main.go:141] libmachine: STDERR: 
	I0701 12:54:29.791792    4142 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/disk.qcow2 +20000M
	I0701 12:54:29.798801    4142 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:54:29.798818    4142 main.go:141] libmachine: STDERR: 
	I0701 12:54:29.798837    4142 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/disk.qcow2
	I0701 12:54:29.798844    4142 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:54:29.798882    4142 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:7e:3c:2e:2d:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/disk.qcow2
	I0701 12:54:29.800358    4142 main.go:141] libmachine: STDOUT: 
	I0701 12:54:29.800371    4142 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:54:29.800390    4142 client.go:171] LocalClient.Create took 158.236625ms
	I0701 12:54:31.802534    4142 start.go:128] duration metric: createHost completed in 2.1841905s
	I0701 12:54:31.802693    4142 start.go:83] releasing machines lock for "bridge-674000", held for 2.18432425s
	W0701 12:54:31.802786    4142 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:31.815177    4142 out.go:177] * Deleting "bridge-674000" in qemu2 ...
	W0701 12:54:31.833954    4142 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:31.833981    4142 start.go:687] Will try again in 5 seconds ...
	I0701 12:54:36.836073    4142 start.go:365] acquiring machines lock for bridge-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:54:36.836457    4142 start.go:369] acquired machines lock for "bridge-674000" in 283.208µs
	I0701 12:54:36.836566    4142 start.go:93] Provisioning new machine with config: &{Name:bridge-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:54:36.836818    4142 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:54:36.848167    4142 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:54:36.893550    4142 start.go:159] libmachine.API.Create for "bridge-674000" (driver="qemu2")
	I0701 12:54:36.893587    4142 client.go:168] LocalClient.Create starting
	I0701 12:54:36.893730    4142 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:54:36.893772    4142 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:36.893791    4142 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:36.893885    4142 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:54:36.893917    4142 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:36.893936    4142 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:36.894449    4142 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:54:37.020334    4142 main.go:141] libmachine: Creating SSH key...
	I0701 12:54:37.113930    4142 main.go:141] libmachine: Creating Disk image...
	I0701 12:54:37.113936    4142 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:54:37.114097    4142 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/disk.qcow2
	I0701 12:54:37.122494    4142 main.go:141] libmachine: STDOUT: 
	I0701 12:54:37.122507    4142 main.go:141] libmachine: STDERR: 
	I0701 12:54:37.122564    4142 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/disk.qcow2 +20000M
	I0701 12:54:37.129700    4142 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:54:37.129713    4142 main.go:141] libmachine: STDERR: 
	I0701 12:54:37.129733    4142 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/disk.qcow2
	I0701 12:54:37.129740    4142 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:54:37.129778    4142 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:48:f0:aa:19:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/bridge-674000/disk.qcow2
	I0701 12:54:37.131215    4142 main.go:141] libmachine: STDOUT: 
	I0701 12:54:37.131226    4142 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:54:37.131239    4142 client.go:171] LocalClient.Create took 237.652375ms
	I0701 12:54:39.133363    4142 start.go:128] duration metric: createHost completed in 2.296557209s
	I0701 12:54:39.133458    4142 start.go:83] releasing machines lock for "bridge-674000", held for 2.296993916s
	W0701 12:54:39.133861    4142 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:39.144599    4142 out.go:177] 
	W0701 12:54:39.148658    4142 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:54:39.148680    4142 out.go:239] * 
	* 
	W0701 12:54:39.151271    4142 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:54:39.160403    4142 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.67s)
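
Why a refused dial is fatal to the whole start: the QEMU command line above declares its NIC as -netdev socket,id=net0,fd=3, i.e. QEMU expects an already-connected descriptor rather than opening the socket itself. socket_vmnet_client's role is to open /var/run/socket_vmnet and hand that connection to QEMU as file descriptor 3; when the connect is refused it exits with status 1 before QEMU ever starts, which is why STDOUT is empty in every attempt. A hedged Go sketch of the hand-off pattern (illustrative only, not socket_vmnet_client's actual source):

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Step 1: connect to the daemon. This is the step that fails in this report.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatal(err) // "connection refused" -> exit status 1, QEMU never runs
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// Step 2: exec QEMU with the connected socket inherited. ExtraFiles[0]
		// becomes descriptor 3 in the child, matching "-netdev socket,id=net0,fd=3".
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}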

TestNetworkPlugins/group/kubenet/Start (9.67s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.666565542s)

-- stdout --
	* [kubenet-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-674000 in cluster kubenet-674000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-674000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:54:41.283025    4256 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:54:41.283156    4256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:54:41.283158    4256 out.go:309] Setting ErrFile to fd 2...
	I0701 12:54:41.283161    4256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:54:41.283241    4256 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:54:41.284323    4256 out.go:303] Setting JSON to false
	I0701 12:54:41.299388    4256 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1451,"bootTime":1688239830,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:54:41.299687    4256 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:54:41.304790    4256 out.go:177] * [kubenet-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:54:41.311687    4256 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:54:41.315714    4256 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:54:41.311719    4256 notify.go:220] Checking for updates...
	I0701 12:54:41.321657    4256 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:54:41.324753    4256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:54:41.327796    4256 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:54:41.330771    4256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:54:41.334077    4256 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:54:41.334120    4256 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:54:41.338804    4256 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:54:41.345700    4256 start.go:297] selected driver: qemu2
	I0701 12:54:41.345706    4256 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:54:41.345711    4256 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:54:41.347600    4256 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:54:41.350788    4256 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:54:41.352304    4256 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:54:41.352328    4256 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0701 12:54:41.352338    4256 start_flags.go:319] config:
	{Name:kubenet-674000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:54:41.356339    4256 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:54:41.363759    4256 out.go:177] * Starting control plane node kubenet-674000 in cluster kubenet-674000
	I0701 12:54:41.367656    4256 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:54:41.367676    4256 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:54:41.367686    4256 cache.go:57] Caching tarball of preloaded images
	I0701 12:54:41.367733    4256 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:54:41.367738    4256 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:54:41.367799    4256 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/kubenet-674000/config.json ...
	I0701 12:54:41.367812    4256 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/kubenet-674000/config.json: {Name:mkba40b288239f7075226a61a8b5edeb0e1ee63d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:54:41.368012    4256 start.go:365] acquiring machines lock for kubenet-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:54:41.368041    4256 start.go:369] acquired machines lock for "kubenet-674000" in 23.375µs
	I0701 12:54:41.368053    4256 start.go:93] Provisioning new machine with config: &{Name:kubenet-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:54:41.368084    4256 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:54:41.376779    4256 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:54:41.392259    4256 start.go:159] libmachine.API.Create for "kubenet-674000" (driver="qemu2")
	I0701 12:54:41.392277    4256 client.go:168] LocalClient.Create starting
	I0701 12:54:41.392335    4256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:54:41.392354    4256 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:41.392365    4256 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:41.392405    4256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:54:41.392423    4256 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:41.392434    4256 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:41.392742    4256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:54:41.499908    4256 main.go:141] libmachine: Creating SSH key...
	I0701 12:54:41.528978    4256 main.go:141] libmachine: Creating Disk image...
	I0701 12:54:41.528983    4256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:54:41.529132    4256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/disk.qcow2
	I0701 12:54:41.537491    4256 main.go:141] libmachine: STDOUT: 
	I0701 12:54:41.537505    4256 main.go:141] libmachine: STDERR: 
	I0701 12:54:41.537554    4256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/disk.qcow2 +20000M
	I0701 12:54:41.544575    4256 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:54:41.544591    4256 main.go:141] libmachine: STDERR: 
	I0701 12:54:41.544608    4256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/disk.qcow2
	I0701 12:54:41.544612    4256 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:54:41.544651    4256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:6f:d2:12:cd:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/disk.qcow2
	I0701 12:54:41.546160    4256 main.go:141] libmachine: STDOUT: 
	I0701 12:54:41.546173    4256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:54:41.546190    4256 client.go:171] LocalClient.Create took 153.91125ms
	I0701 12:54:43.548309    4256 start.go:128] duration metric: createHost completed in 2.180248083s
	I0701 12:54:43.548384    4256 start.go:83] releasing machines lock for "kubenet-674000", held for 2.180374541s
	W0701 12:54:43.548480    4256 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:43.558721    4256 out.go:177] * Deleting "kubenet-674000" in qemu2 ...
	W0701 12:54:43.577651    4256 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:43.577671    4256 start.go:687] Will try again in 5 seconds ...
	I0701 12:54:48.579799    4256 start.go:365] acquiring machines lock for kubenet-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:54:48.580264    4256 start.go:369] acquired machines lock for "kubenet-674000" in 392.458µs
	I0701 12:54:48.580375    4256 start.go:93] Provisioning new machine with config: &{Name:kubenet-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:54:48.580751    4256 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:54:48.590228    4256 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:54:48.636596    4256 start.go:159] libmachine.API.Create for "kubenet-674000" (driver="qemu2")
	I0701 12:54:48.636655    4256 client.go:168] LocalClient.Create starting
	I0701 12:54:48.636801    4256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:54:48.636850    4256 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:48.636872    4256 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:48.636959    4256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:54:48.636998    4256 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:48.637012    4256 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:48.637545    4256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:54:48.765652    4256 main.go:141] libmachine: Creating SSH key...
	I0701 12:54:48.864068    4256 main.go:141] libmachine: Creating Disk image...
	I0701 12:54:48.864074    4256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:54:48.864224    4256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/disk.qcow2
	I0701 12:54:48.872614    4256 main.go:141] libmachine: STDOUT: 
	I0701 12:54:48.872628    4256 main.go:141] libmachine: STDERR: 
	I0701 12:54:48.872681    4256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/disk.qcow2 +20000M
	I0701 12:54:48.879741    4256 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:54:48.879752    4256 main.go:141] libmachine: STDERR: 
	I0701 12:54:48.879772    4256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/disk.qcow2
	I0701 12:54:48.879779    4256 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:54:48.879812    4256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:46:b2:2b:ac:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/kubenet-674000/disk.qcow2
	I0701 12:54:48.881285    4256 main.go:141] libmachine: STDOUT: 
	I0701 12:54:48.881297    4256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:54:48.881309    4256 client.go:171] LocalClient.Create took 244.649291ms
	I0701 12:54:50.883447    4256 start.go:128] duration metric: createHost completed in 2.302712625s
	I0701 12:54:50.883545    4256 start.go:83] releasing machines lock for "kubenet-674000", held for 2.303282666s
	W0701 12:54:50.883995    4256 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:50.893791    4256 out.go:177] 
	W0701 12:54:50.897654    4256 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:54:50.897680    4256 out.go:239] * 
	* 
	W0701 12:54:50.900690    4256 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:54:50.909764    4256 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.67s)
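
Every VM creation in this run dies at the same step: socket_vmnet_client exits with Failed to connect to "/var/run/socket_vmnet": Connection refused before qemu-system-aarch64 is ever launched, which is what a missing or stopped socket_vmnet daemon looks like. That one step can be reproduced in isolation with a minimal Go sketch (illustrative only, not part of the test suite; the socket path is the SocketVMnetPath value from the profile config above):

	// probe.go - dial the socket_vmnet control socket the way
	// socket_vmnet_client does before it execs qemu-system-aarch64.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// "connection refused" here matches the STDERR in the log and
			// means nothing is listening on the socket.
			fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening; VM creation should get past this step")
	}

If the probe fails the same way, the socket_vmnet daemon on the CI host needs to be (re)started before any of these Start tests can pass.
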
TestNetworkPlugins/group/custom-flannel/Start (9.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.838224208s)

                                                
                                                
-- stdout --
	* [custom-flannel-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-674000 in cluster custom-flannel-674000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-674000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 12:54:53.036029    4366 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:54:53.036171    4366 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:54:53.036178    4366 out.go:309] Setting ErrFile to fd 2...
	I0701 12:54:53.036181    4366 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:54:53.036252    4366 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:54:53.037243    4366 out.go:303] Setting JSON to false
	I0701 12:54:53.052472    4366 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1463,"bootTime":1688239830,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:54:53.052535    4366 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:54:53.061443    4366 out.go:177] * [custom-flannel-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:54:53.065418    4366 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:54:53.065492    4366 notify.go:220] Checking for updates...
	I0701 12:54:53.069378    4366 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:54:53.072488    4366 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:54:53.075425    4366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:54:53.079471    4366 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:54:53.082504    4366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:54:53.085740    4366 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:54:53.085779    4366 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:54:53.090458    4366 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:54:53.097438    4366 start.go:297] selected driver: qemu2
	I0701 12:54:53.097445    4366 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:54:53.097451    4366 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:54:53.099507    4366 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:54:53.102442    4366 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:54:53.105515    4366 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:54:53.105531    4366 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0701 12:54:53.105545    4366 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0701 12:54:53.105552    4366 start_flags.go:319] config:
	{Name:custom-flannel-674000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:54:53.109685    4366 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:54:53.116424    4366 out.go:177] * Starting control plane node custom-flannel-674000 in cluster custom-flannel-674000
	I0701 12:54:53.120220    4366 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:54:53.120243    4366 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:54:53.120257    4366 cache.go:57] Caching tarball of preloaded images
	I0701 12:54:53.120317    4366 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:54:53.120322    4366 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:54:53.120379    4366 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/custom-flannel-674000/config.json ...
	I0701 12:54:53.120391    4366 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/custom-flannel-674000/config.json: {Name:mk20c1dc9be8bd71ae72dd7a5bf51b5c295de750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:54:53.120590    4366 start.go:365] acquiring machines lock for custom-flannel-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:54:53.120621    4366 start.go:369] acquired machines lock for "custom-flannel-674000" in 25.833µs
	I0701 12:54:53.120634    4366 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:54:53.120660    4366 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:54:53.128422    4366 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:54:53.144113    4366 start.go:159] libmachine.API.Create for "custom-flannel-674000" (driver="qemu2")
	I0701 12:54:53.144143    4366 client.go:168] LocalClient.Create starting
	I0701 12:54:53.144194    4366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:54:53.144216    4366 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:53.144224    4366 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:53.144268    4366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:54:53.144282    4366 main.go:141] libmachine: Decoding PEM data...
	I0701 12:54:53.144291    4366 main.go:141] libmachine: Parsing certificate...
	I0701 12:54:53.144628    4366 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:54:53.258875    4366 main.go:141] libmachine: Creating SSH key...
	I0701 12:54:53.371296    4366 main.go:141] libmachine: Creating Disk image...
	I0701 12:54:53.371303    4366 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:54:53.371451    4366 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/disk.qcow2
	I0701 12:54:53.379881    4366 main.go:141] libmachine: STDOUT: 
	I0701 12:54:53.379900    4366 main.go:141] libmachine: STDERR: 
	I0701 12:54:53.379970    4366 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/disk.qcow2 +20000M
	I0701 12:54:53.387102    4366 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:54:53.387113    4366 main.go:141] libmachine: STDERR: 
	I0701 12:54:53.387127    4366 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/disk.qcow2
	I0701 12:54:53.387137    4366 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:54:53.387177    4366 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:70:81:6e:2b:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/disk.qcow2
	I0701 12:54:53.388638    4366 main.go:141] libmachine: STDOUT: 
	I0701 12:54:53.388652    4366 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:54:53.388672    4366 client.go:171] LocalClient.Create took 244.528916ms
	I0701 12:54:55.390795    4366 start.go:128] duration metric: createHost completed in 2.270160208s
	I0701 12:54:55.390899    4366 start.go:83] releasing machines lock for "custom-flannel-674000", held for 2.270278625s
	W0701 12:54:55.390973    4366 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:55.401274    4366 out.go:177] * Deleting "custom-flannel-674000" in qemu2 ...
	W0701 12:54:55.420354    4366 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:54:55.420385    4366 start.go:687] Will try again in 5 seconds ...
	I0701 12:55:00.422513    4366 start.go:365] acquiring machines lock for custom-flannel-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:00.423059    4366 start.go:369] acquired machines lock for "custom-flannel-674000" in 426.458µs
	I0701 12:55:00.423179    4366 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:55:00.423496    4366 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:55:00.433326    4366 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:55:00.480762    4366 start.go:159] libmachine.API.Create for "custom-flannel-674000" (driver="qemu2")
	I0701 12:55:00.480798    4366 client.go:168] LocalClient.Create starting
	I0701 12:55:00.480922    4366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:55:00.480973    4366 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:00.480999    4366 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:00.481085    4366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:55:00.481114    4366 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:00.481127    4366 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:00.481691    4366 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:55:00.606850    4366 main.go:141] libmachine: Creating SSH key...
	I0701 12:55:00.789422    4366 main.go:141] libmachine: Creating Disk image...
	I0701 12:55:00.789429    4366 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:55:00.789581    4366 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/disk.qcow2
	I0701 12:55:00.798185    4366 main.go:141] libmachine: STDOUT: 
	I0701 12:55:00.798198    4366 main.go:141] libmachine: STDERR: 
	I0701 12:55:00.798252    4366 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/disk.qcow2 +20000M
	I0701 12:55:00.805189    4366 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:55:00.805200    4366 main.go:141] libmachine: STDERR: 
	I0701 12:55:00.805214    4366 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/disk.qcow2
	I0701 12:55:00.805219    4366 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:55:00.805258    4366 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:e3:3e:b5:6d:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/custom-flannel-674000/disk.qcow2
	I0701 12:55:00.806603    4366 main.go:141] libmachine: STDOUT: 
	I0701 12:55:00.806615    4366 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:00.806637    4366 client.go:171] LocalClient.Create took 325.8315ms
	I0701 12:55:02.808809    4366 start.go:128] duration metric: createHost completed in 2.385325583s
	I0701 12:55:02.808888    4366 start.go:83] releasing machines lock for "custom-flannel-674000", held for 2.385845625s
	W0701 12:55:02.809262    4366 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:02.817663    4366 out.go:177] 
	W0701 12:55:02.822121    4366 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:55:02.822145    4366 out.go:239] * 
	* 
	W0701 12:55:02.824495    4366 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:55:02.835022    4366 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.84s)
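
The custom-flannel log above also makes the recovery path visible: the first createHost fails, the half-built "custom-flannel-674000" profile is deleted, minikube waits five seconds ("Will try again in 5 seconds ..."), retries once, and only the second failure is promoted to GUEST_PROVISION with exit status 80. A compressed reconstruction of that flow follows, written as an illustrative sketch rather than minikube's actual source:

	// retry.go - illustrative two-attempt flow; createHost stands in for
	// libmachine.API.Create and always fails the way this run does.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			fmt.Println(`* Deleting "custom-flannel-674000" in qemu2 ...`) // remove the half-built VM
			time.Sleep(5 * time.Second)                                   // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				// the second failure is what surfaces as exit status 80
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}

Because the daemon never comes back between attempts, the retry is guaranteed to fail, which is why every network-plugin test in this group exits after roughly the same ten seconds.
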
TestNetworkPlugins/group/calico/Start (9.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.689419541s)

                                                
                                                
-- stdout --
	* [calico-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-674000 in cluster calico-674000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-674000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 12:55:05.134314    4484 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:55:05.134444    4484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:05.134447    4484 out.go:309] Setting ErrFile to fd 2...
	I0701 12:55:05.134449    4484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:05.134514    4484 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:55:05.135585    4484 out.go:303] Setting JSON to false
	I0701 12:55:05.150841    4484 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1475,"bootTime":1688239830,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:55:05.150905    4484 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:55:05.155780    4484 out.go:177] * [calico-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:55:05.162706    4484 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:55:05.166773    4484 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:55:05.162768    4484 notify.go:220] Checking for updates...
	I0701 12:55:05.172669    4484 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:55:05.175735    4484 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:55:05.178607    4484 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:55:05.181696    4484 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:55:05.185032    4484 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:55:05.185072    4484 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:55:05.188626    4484 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:55:05.195656    4484 start.go:297] selected driver: qemu2
	I0701 12:55:05.195662    4484 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:55:05.195668    4484 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:55:05.197665    4484 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:55:05.199280    4484 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:55:05.201759    4484 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:55:05.201785    4484 cni.go:84] Creating CNI manager for "calico"
	I0701 12:55:05.201790    4484 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0701 12:55:05.201800    4484 start_flags.go:319] config:
	{Name:calico-674000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:55:05.206234    4484 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:05.213684    4484 out.go:177] * Starting control plane node calico-674000 in cluster calico-674000
	I0701 12:55:05.217700    4484 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:55:05.217724    4484 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:55:05.217737    4484 cache.go:57] Caching tarball of preloaded images
	I0701 12:55:05.217795    4484 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:55:05.217801    4484 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:55:05.217870    4484 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/calico-674000/config.json ...
	I0701 12:55:05.217883    4484 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/calico-674000/config.json: {Name:mk70597e7efb3917092400273840aa72050ad74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:55:05.218087    4484 start.go:365] acquiring machines lock for calico-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:05.218115    4484 start.go:369] acquired machines lock for "calico-674000" in 23µs
	I0701 12:55:05.218126    4484 start.go:93] Provisioning new machine with config: &{Name:calico-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:55:05.218158    4484 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:55:05.226649    4484 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:55:05.242494    4484 start.go:159] libmachine.API.Create for "calico-674000" (driver="qemu2")
	I0701 12:55:05.242513    4484 client.go:168] LocalClient.Create starting
	I0701 12:55:05.242581    4484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:55:05.242602    4484 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:05.242613    4484 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:05.242658    4484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:55:05.242672    4484 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:05.242680    4484 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:05.243000    4484 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:55:05.357634    4484 main.go:141] libmachine: Creating SSH key...
	I0701 12:55:05.439720    4484 main.go:141] libmachine: Creating Disk image...
	I0701 12:55:05.439729    4484 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:55:05.439889    4484 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/disk.qcow2
	I0701 12:55:05.448366    4484 main.go:141] libmachine: STDOUT: 
	I0701 12:55:05.448379    4484 main.go:141] libmachine: STDERR: 
	I0701 12:55:05.448427    4484 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/disk.qcow2 +20000M
	I0701 12:55:05.455500    4484 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:55:05.455512    4484 main.go:141] libmachine: STDERR: 
	I0701 12:55:05.455534    4484 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/disk.qcow2
	I0701 12:55:05.455541    4484 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:55:05.455576    4484 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:b7:32:01:98:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/disk.qcow2
	I0701 12:55:05.457130    4484 main.go:141] libmachine: STDOUT: 
	I0701 12:55:05.457142    4484 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:05.457165    4484 client.go:171] LocalClient.Create took 214.647334ms
	I0701 12:55:07.459345    4484 start.go:128] duration metric: createHost completed in 2.241211333s
	I0701 12:55:07.459396    4484 start.go:83] releasing machines lock for "calico-674000", held for 2.241314958s
	W0701 12:55:07.459451    4484 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:07.470645    4484 out.go:177] * Deleting "calico-674000" in qemu2 ...
	W0701 12:55:07.488992    4484 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:07.489023    4484 start.go:687] Will try again in 5 seconds ...
	I0701 12:55:12.491188    4484 start.go:365] acquiring machines lock for calico-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:12.491726    4484 start.go:369] acquired machines lock for "calico-674000" in 423.125µs
	I0701 12:55:12.491844    4484 start.go:93] Provisioning new machine with config: &{Name:calico-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:55:12.492141    4484 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:55:12.501973    4484 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:55:12.549703    4484 start.go:159] libmachine.API.Create for "calico-674000" (driver="qemu2")
	I0701 12:55:12.549745    4484 client.go:168] LocalClient.Create starting
	I0701 12:55:12.549877    4484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:55:12.549921    4484 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:12.549939    4484 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:12.550017    4484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:55:12.550045    4484 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:12.550060    4484 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:12.550517    4484 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:55:12.672456    4484 main.go:141] libmachine: Creating SSH key...
	I0701 12:55:12.736715    4484 main.go:141] libmachine: Creating Disk image...
	I0701 12:55:12.736720    4484 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:55:12.736856    4484 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/disk.qcow2
	I0701 12:55:12.745281    4484 main.go:141] libmachine: STDOUT: 
	I0701 12:55:12.745296    4484 main.go:141] libmachine: STDERR: 
	I0701 12:55:12.745349    4484 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/disk.qcow2 +20000M
	I0701 12:55:12.752402    4484 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:55:12.752415    4484 main.go:141] libmachine: STDERR: 
	I0701 12:55:12.752432    4484 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/disk.qcow2
	I0701 12:55:12.752437    4484 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:55:12.752477    4484 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:6b:52:09:96:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/calico-674000/disk.qcow2
	I0701 12:55:12.753941    4484 main.go:141] libmachine: STDOUT: 
	I0701 12:55:12.753957    4484 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:12.753968    4484 client.go:171] LocalClient.Create took 204.22125ms
	I0701 12:55:14.756131    4484 start.go:128] duration metric: createHost completed in 2.263998333s
	I0701 12:55:14.756245    4484 start.go:83] releasing machines lock for "calico-674000", held for 2.264505958s
	W0701 12:55:14.756713    4484 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:14.766372    4484 out.go:177] 
	W0701 12:55:14.770540    4484 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:55:14.770568    4484 out.go:239] * 
	* 
	W0701 12:55:14.773343    4484 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:55:14.783483    4484 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.69s)
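
One useful negative result in these logs: the disk-image pipeline is healthy. In every attempt the qemu-img convert and resize calls return cleanly ("Image resized.") and only the subsequent socket_vmnet connection fails. The same steps can be exercised on their own; this sketch assumes qemu-img is on PATH and uses a throwaway temp directory instead of the CI's MINIKUBE_HOME:

	// imgcheck.go - rerun the logged qemu-img steps against a scratch image.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)

	func run(args ...string) error {
		cmd := exec.Command(args[0], args[1:]...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("$ %s\n%s", cmd.String(), out) // echo each step like the log does
		return err
	}

	func main() {
		dir, err := os.MkdirTemp("", "qemu-img-probe") // throwaway scratch dir
		if err != nil {
			panic(err)
		}
		defer os.RemoveAll(dir)

		raw := filepath.Join(dir, "disk.qcow2.raw")
		qcow := filepath.Join(dir, "disk.qcow2")

		// mirror the logged steps: make a raw image, convert it to qcow2, grow it
		steps := [][]string{
			{"qemu-img", "create", "-f", "raw", raw, "1M"},
			{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow},
			{"qemu-img", "resize", qcow, "+20000M"},
		}
		for _, s := range steps {
			if err := run(s...); err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
		fmt.Println("qemu-img pipeline OK; failure is isolated to socket_vmnet")
	}

Passing here while the Start tests fail confirms the breakage is confined to the socket_vmnet networking layer, not QEMU or the image tooling.
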
TestNetworkPlugins/group/false/Start (9.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-674000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.774705541s)

                                                
                                                
-- stdout --
	* [false-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-674000 in cluster false-674000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-674000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 12:55:17.090956    4604 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:55:17.091075    4604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:17.091078    4604 out.go:309] Setting ErrFile to fd 2...
	I0701 12:55:17.091080    4604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:17.091144    4604 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:55:17.092109    4604 out.go:303] Setting JSON to false
	I0701 12:55:17.107912    4604 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1487,"bootTime":1688239830,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:55:17.107979    4604 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:55:17.113014    4604 out.go:177] * [false-674000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:55:17.120175    4604 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:55:17.120225    4604 notify.go:220] Checking for updates...
	I0701 12:55:17.123113    4604 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:55:17.127118    4604 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:55:17.130161    4604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:55:17.134017    4604 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:55:17.137125    4604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:55:17.140521    4604 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:55:17.140571    4604 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:55:17.144997    4604 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:55:17.152123    4604 start.go:297] selected driver: qemu2
	I0701 12:55:17.152130    4604 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:55:17.152138    4604 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:55:17.154088    4604 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:55:17.157096    4604 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:55:17.160184    4604 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:55:17.160200    4604 cni.go:84] Creating CNI manager for "false"
	I0701 12:55:17.160204    4604 start_flags.go:319] config:
	{Name:false-674000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:false-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:55:17.164302    4604 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:17.172150    4604 out.go:177] * Starting control plane node false-674000 in cluster false-674000
	I0701 12:55:17.176168    4604 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:55:17.176193    4604 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:55:17.176204    4604 cache.go:57] Caching tarball of preloaded images
	I0701 12:55:17.176273    4604 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:55:17.176286    4604 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:55:17.176351    4604 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/false-674000/config.json ...
	I0701 12:55:17.176370    4604 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/false-674000/config.json: {Name:mk5e3f3cce344385480dcbc0f7229cfd337768d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:55:17.176568    4604 start.go:365] acquiring machines lock for false-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:17.176597    4604 start.go:369] acquired machines lock for "false-674000" in 23.5µs
	I0701 12:55:17.176609    4604 start.go:93] Provisioning new machine with config: &{Name:false-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:false-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:55:17.176641    4604 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:55:17.185154    4604 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:55:17.200938    4604 start.go:159] libmachine.API.Create for "false-674000" (driver="qemu2")
	I0701 12:55:17.200964    4604 client.go:168] LocalClient.Create starting
	I0701 12:55:17.201025    4604 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:55:17.201054    4604 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:17.201064    4604 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:17.201113    4604 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:55:17.201129    4604 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:17.201138    4604 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:17.201472    4604 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:55:17.312263    4604 main.go:141] libmachine: Creating SSH key...
	I0701 12:55:17.422696    4604 main.go:141] libmachine: Creating Disk image...
	I0701 12:55:17.422702    4604 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:55:17.422853    4604 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/disk.qcow2
	I0701 12:55:17.431307    4604 main.go:141] libmachine: STDOUT: 
	I0701 12:55:17.431326    4604 main.go:141] libmachine: STDERR: 
	I0701 12:55:17.431384    4604 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/disk.qcow2 +20000M
	I0701 12:55:17.438428    4604 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:55:17.438441    4604 main.go:141] libmachine: STDERR: 
	I0701 12:55:17.438455    4604 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/disk.qcow2
	I0701 12:55:17.438461    4604 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:55:17.438492    4604 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:2d:1e:eb:ae:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/disk.qcow2
	I0701 12:55:17.439953    4604 main.go:141] libmachine: STDOUT: 
	I0701 12:55:17.439966    4604 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:17.439985    4604 client.go:171] LocalClient.Create took 239.017125ms
	I0701 12:55:19.442153    4604 start.go:128] duration metric: createHost completed in 2.265532875s
	I0701 12:55:19.442212    4604 start.go:83] releasing machines lock for "false-674000", held for 2.265648833s
	W0701 12:55:19.442274    4604 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:19.452664    4604 out.go:177] * Deleting "false-674000" in qemu2 ...
	W0701 12:55:19.474758    4604 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:19.474789    4604 start.go:687] Will try again in 5 seconds ...
	I0701 12:55:24.477024    4604 start.go:365] acquiring machines lock for false-674000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:24.477540    4604 start.go:369] acquired machines lock for "false-674000" in 424.458µs
	I0701 12:55:24.477649    4604 start.go:93] Provisioning new machine with config: &{Name:false-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:false-674000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:55:24.478004    4604 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:55:24.486617    4604 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0701 12:55:24.534433    4604 start.go:159] libmachine.API.Create for "false-674000" (driver="qemu2")
	I0701 12:55:24.534492    4604 client.go:168] LocalClient.Create starting
	I0701 12:55:24.534606    4604 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:55:24.534645    4604 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:24.534663    4604 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:24.534744    4604 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:55:24.534770    4604 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:24.534782    4604 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:24.535302    4604 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:55:24.655130    4604 main.go:141] libmachine: Creating SSH key...
	I0701 12:55:24.779411    4604 main.go:141] libmachine: Creating Disk image...
	I0701 12:55:24.779417    4604 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:55:24.779555    4604 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/disk.qcow2
	I0701 12:55:24.788295    4604 main.go:141] libmachine: STDOUT: 
	I0701 12:55:24.788312    4604 main.go:141] libmachine: STDERR: 
	I0701 12:55:24.788366    4604 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/disk.qcow2 +20000M
	I0701 12:55:24.795412    4604 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:55:24.795425    4604 main.go:141] libmachine: STDERR: 
	I0701 12:55:24.795439    4604 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/disk.qcow2
	I0701 12:55:24.795444    4604 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:55:24.795478    4604 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:3c:13:b3:60:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/false-674000/disk.qcow2
	I0701 12:55:24.797017    4604 main.go:141] libmachine: STDOUT: 
	I0701 12:55:24.797029    4604 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:24.797041    4604 client.go:171] LocalClient.Create took 262.547417ms
	I0701 12:55:26.799163    4604 start.go:128] duration metric: createHost completed in 2.321175208s
	I0701 12:55:26.799223    4604 start.go:83] releasing machines lock for "false-674000", held for 2.321701542s
	W0701 12:55:26.799641    4604 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:26.809428    4604 out.go:177] 
	W0701 12:55:26.813474    4604 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:55:26.813500    4604 out.go:239] * 
	* 
	W0701 12:55:26.816335    4604 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:55:26.825339    4604 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.78s)
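
Note: every QEMU launch in this group fails at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the VM never gets its network file descriptor. A minimal triage sketch on the affected host, using the paths shown in the log above (the daemon invocation at the end is an assumption about how socket_vmnet was installed, not something this report confirms):

	# Is anything listening on the socket the tests expect?
	ls -l /var/run/socket_vmnet
	# socket_vmnet_client execs its trailing command with the connection on fd 3,
	# so a no-op command is enough to probe connectivity:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# If that also prints "Connection refused", (re)start the daemon, e.g.:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet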

TestStartStop/group/old-k8s-version/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-326000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-326000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.777838125s)

-- stdout --
	* [old-k8s-version-326000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-326000 in cluster old-k8s-version-326000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-326000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:55:28.920174    4714 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:55:28.920296    4714 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:28.920299    4714 out.go:309] Setting ErrFile to fd 2...
	I0701 12:55:28.920301    4714 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:28.920375    4714 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:55:28.921439    4714 out.go:303] Setting JSON to false
	I0701 12:55:28.936795    4714 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1498,"bootTime":1688239830,"procs":397,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:55:28.936882    4714 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:55:28.940705    4714 out.go:177] * [old-k8s-version-326000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:55:28.947601    4714 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:55:28.951593    4714 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:55:28.947671    4714 notify.go:220] Checking for updates...
	I0701 12:55:28.955513    4714 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:55:28.958612    4714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:55:28.961576    4714 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:55:28.964565    4714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:55:28.967892    4714 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:55:28.967939    4714 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:55:28.972640    4714 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:55:28.979570    4714 start.go:297] selected driver: qemu2
	I0701 12:55:28.979576    4714 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:55:28.979587    4714 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:55:28.981533    4714 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:55:28.989563    4714 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:55:28.992699    4714 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:55:28.992719    4714 cni.go:84] Creating CNI manager for ""
	I0701 12:55:28.992727    4714 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0701 12:55:28.992732    4714 start_flags.go:319] config:
	{Name:old-k8s-version-326000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:55:28.997015    4714 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:29.003474    4714 out.go:177] * Starting control plane node old-k8s-version-326000 in cluster old-k8s-version-326000
	I0701 12:55:29.007603    4714 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0701 12:55:29.007631    4714 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0701 12:55:29.007647    4714 cache.go:57] Caching tarball of preloaded images
	I0701 12:55:29.007711    4714 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:55:29.007723    4714 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0701 12:55:29.007780    4714 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/old-k8s-version-326000/config.json ...
	I0701 12:55:29.007794    4714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/old-k8s-version-326000/config.json: {Name:mk4803499b6a52aa2e0448a44c54bbf9581d0145 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:55:29.007994    4714 start.go:365] acquiring machines lock for old-k8s-version-326000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:29.008026    4714 start.go:369] acquired machines lock for "old-k8s-version-326000" in 24.166µs
	I0701 12:55:29.008038    4714 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-326000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:55:29.008069    4714 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:55:29.015536    4714 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:55:29.031525    4714 start.go:159] libmachine.API.Create for "old-k8s-version-326000" (driver="qemu2")
	I0701 12:55:29.031546    4714 client.go:168] LocalClient.Create starting
	I0701 12:55:29.031604    4714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:55:29.031623    4714 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:29.031635    4714 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:29.031666    4714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:55:29.031681    4714 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:29.031690    4714 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:29.031999    4714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:55:29.150519    4714 main.go:141] libmachine: Creating SSH key...
	I0701 12:55:29.281900    4714 main.go:141] libmachine: Creating Disk image...
	I0701 12:55:29.281911    4714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:55:29.282054    4714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/disk.qcow2
	I0701 12:55:29.290501    4714 main.go:141] libmachine: STDOUT: 
	I0701 12:55:29.290516    4714 main.go:141] libmachine: STDERR: 
	I0701 12:55:29.290561    4714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/disk.qcow2 +20000M
	I0701 12:55:29.297703    4714 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:55:29.297728    4714 main.go:141] libmachine: STDERR: 
	I0701 12:55:29.297750    4714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/disk.qcow2
	I0701 12:55:29.297764    4714 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:55:29.297816    4714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:0d:00:8a:f3:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/disk.qcow2
	I0701 12:55:29.299319    4714 main.go:141] libmachine: STDOUT: 
	I0701 12:55:29.299332    4714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:29.299349    4714 client.go:171] LocalClient.Create took 267.803375ms
	I0701 12:55:31.301505    4714 start.go:128] duration metric: createHost completed in 2.293450875s
	I0701 12:55:31.301596    4714 start.go:83] releasing machines lock for "old-k8s-version-326000", held for 2.293603292s
	W0701 12:55:31.301730    4714 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:31.310250    4714 out.go:177] * Deleting "old-k8s-version-326000" in qemu2 ...
	W0701 12:55:31.333382    4714 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:31.333415    4714 start.go:687] Will try again in 5 seconds ...
	I0701 12:55:36.335659    4714 start.go:365] acquiring machines lock for old-k8s-version-326000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:36.336308    4714 start.go:369] acquired machines lock for "old-k8s-version-326000" in 533.167µs
	I0701 12:55:36.336440    4714 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-326000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:55:36.336715    4714 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:55:36.345334    4714 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:55:36.392656    4714 start.go:159] libmachine.API.Create for "old-k8s-version-326000" (driver="qemu2")
	I0701 12:55:36.392700    4714 client.go:168] LocalClient.Create starting
	I0701 12:55:36.392843    4714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:55:36.392885    4714 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:36.392905    4714 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:36.392987    4714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:55:36.393014    4714 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:36.393025    4714 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:36.393529    4714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:55:36.517859    4714 main.go:141] libmachine: Creating SSH key...
	I0701 12:55:36.615888    4714 main.go:141] libmachine: Creating Disk image...
	I0701 12:55:36.615894    4714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:55:36.616048    4714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/disk.qcow2
	I0701 12:55:36.624886    4714 main.go:141] libmachine: STDOUT: 
	I0701 12:55:36.624904    4714 main.go:141] libmachine: STDERR: 
	I0701 12:55:36.624963    4714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/disk.qcow2 +20000M
	I0701 12:55:36.632039    4714 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:55:36.632051    4714 main.go:141] libmachine: STDERR: 
	I0701 12:55:36.632063    4714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/disk.qcow2
	I0701 12:55:36.632068    4714 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:55:36.632110    4714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:e0:cb:15:91:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/disk.qcow2
	I0701 12:55:36.633639    4714 main.go:141] libmachine: STDOUT: 
	I0701 12:55:36.633651    4714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:36.633662    4714 client.go:171] LocalClient.Create took 240.958625ms
	I0701 12:55:38.635816    4714 start.go:128] duration metric: createHost completed in 2.299108334s
	I0701 12:55:38.635897    4714 start.go:83] releasing machines lock for "old-k8s-version-326000", held for 2.299600625s
	W0701 12:55:38.636365    4714 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-326000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-326000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:38.644003    4714 out.go:177] 
	W0701 12:55:38.648158    4714 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:55:38.648187    4714 out.go:239] * 
	* 
	W0701 12:55:38.650443    4714 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:55:38.658971    4714 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-326000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 7 (67.487ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-326000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.85s)
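
Note: with FirstStart unable to provision the host, every remaining old-k8s-version subtest below fails as a cascade: status stays "Stopped" and the kubectl context is never written. Once /var/run/socket_vmnet is accepting connections again, the recovery the log itself suggests amounts to deleting the half-created profile and rerunning the start (flags abbreviated from the invocation above):

	out/minikube-darwin-arm64 delete -p old-k8s-version-326000
	out/minikube-darwin-arm64 start -p old-k8s-version-326000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.16.0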

TestStartStop/group/old-k8s-version/serial/DeployApp (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-326000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-326000 create -f testdata/busybox.yaml: exit status 1 (28.774375ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-326000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 7 (28.31575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-326000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 7 (27.079625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-326000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.08s)
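
Note: "error: no openapi getter" here is kubectl failing to fetch schema information because no apiserver ever came up behind the old-k8s-version-326000 context; it is downstream of the FirstStart failure rather than a problem with testdata/busybox.yaml. A hypothetical pre-flight guard for a deploy step like this one might look like:

	# Only attempt the create when the apiserver answers (context name from the log).
	if kubectl --context old-k8s-version-326000 get nodes >/dev/null 2>&1; then
		kubectl --context old-k8s-version-326000 create -f testdata/busybox.yaml
	else
		echo "old-k8s-version-326000 is not running; skipping deploy" >&2
	fi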

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-326000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-326000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-326000 describe deploy/metrics-server -n kube-system: exit status 1 (25.207916ms)

** stderr ** 
	error: context "old-k8s-version-326000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-326000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 7 (27.876666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-326000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
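
Note: the addons enable run above exits cleanly (it appears to only update the profile configuration), but the verification step needs a live cluster, and the context does not exist. On a healthy cluster, the expected image override could be checked with a standard jsonpath query, e.g.:

	kubectl --context old-k8s-version-326000 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'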

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-326000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-326000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.182578209s)

-- stdout --
	* [old-k8s-version-326000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-326000 in cluster old-k8s-version-326000
	* Restarting existing qemu2 VM for "old-k8s-version-326000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-326000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:55:39.111207    4752 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:55:39.111315    4752 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:39.111318    4752 out.go:309] Setting ErrFile to fd 2...
	I0701 12:55:39.111320    4752 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:39.111390    4752 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:55:39.112344    4752 out.go:303] Setting JSON to false
	I0701 12:55:39.127534    4752 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1509,"bootTime":1688239830,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:55:39.127604    4752 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:55:39.132519    4752 out.go:177] * [old-k8s-version-326000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:55:39.139676    4752 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:55:39.139731    4752 notify.go:220] Checking for updates...
	I0701 12:55:39.142631    4752 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:55:39.145656    4752 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:55:39.148644    4752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:55:39.150031    4752 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:55:39.153632    4752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:55:39.156914    4752 config.go:182] Loaded profile config "old-k8s-version-326000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0701 12:55:39.160652    4752 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0701 12:55:39.163622    4752 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:55:39.167641    4752 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 12:55:39.174607    4752 start.go:297] selected driver: qemu2
	I0701 12:55:39.174613    4752 start.go:944] validating driver "qemu2" against &{Name:old-k8s-version-326000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:55:39.174676    4752 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:55:39.176568    4752 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:55:39.176592    4752 cni.go:84] Creating CNI manager for ""
	I0701 12:55:39.176600    4752 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0701 12:55:39.176605    4752 start_flags.go:319] config:
	{Name:old-k8s-version-326000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:55:39.180621    4752 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:39.186588    4752 out.go:177] * Starting control plane node old-k8s-version-326000 in cluster old-k8s-version-326000
	I0701 12:55:39.190647    4752 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0701 12:55:39.190668    4752 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0701 12:55:39.190679    4752 cache.go:57] Caching tarball of preloaded images
	I0701 12:55:39.190733    4752 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:55:39.190738    4752 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0701 12:55:39.190791    4752 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/old-k8s-version-326000/config.json ...
	I0701 12:55:39.191143    4752 start.go:365] acquiring machines lock for old-k8s-version-326000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:39.191171    4752 start.go:369] acquired machines lock for "old-k8s-version-326000" in 22.625µs
	I0701 12:55:39.191181    4752 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:55:39.191186    4752 fix.go:54] fixHost starting: 
	I0701 12:55:39.191304    4752 fix.go:102] recreateIfNeeded on old-k8s-version-326000: state=Stopped err=<nil>
	W0701 12:55:39.191312    4752 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:55:39.195553    4752 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-326000" ...
	I0701 12:55:39.203653    4752 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:e0:cb:15:91:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/disk.qcow2
	I0701 12:55:39.205598    4752 main.go:141] libmachine: STDOUT: 
	I0701 12:55:39.205764    4752 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:39.205806    4752 fix.go:56] fixHost completed within 14.621084ms
	I0701 12:55:39.205812    4752 start.go:83] releasing machines lock for "old-k8s-version-326000", held for 14.637125ms
	W0701 12:55:39.205821    4752 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:55:39.205875    4752 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:39.205880    4752 start.go:687] Will try again in 5 seconds ...
	I0701 12:55:44.208088    4752 start.go:365] acquiring machines lock for old-k8s-version-326000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:44.208525    4752 start.go:369] acquired machines lock for "old-k8s-version-326000" in 319µs
	I0701 12:55:44.208659    4752 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:55:44.208680    4752 fix.go:54] fixHost starting: 
	I0701 12:55:44.209430    4752 fix.go:102] recreateIfNeeded on old-k8s-version-326000: state=Stopped err=<nil>
	W0701 12:55:44.209457    4752 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:55:44.218819    4752 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-326000" ...
	I0701 12:55:44.223217    4752 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:e0:cb:15:91:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/old-k8s-version-326000/disk.qcow2
	I0701 12:55:44.232624    4752 main.go:141] libmachine: STDOUT: 
	I0701 12:55:44.232672    4752 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:44.232762    4752 fix.go:56] fixHost completed within 24.085333ms
	I0701 12:55:44.232779    4752 start.go:83] releasing machines lock for "old-k8s-version-326000", held for 24.234583ms
	W0701 12:55:44.232946    4752 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-326000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-326000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:44.241886    4752 out.go:177] 
	W0701 12:55:44.245001    4752 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:55:44.245024    4752 out.go:239] * 
	* 
	W0701 12:55:44.247835    4752 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:55:44.254859    4752 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-326000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 7 (68.82625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-326000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
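Every failure in this serial group traces back to the same host-side fault visible above: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and the VM never starts. A minimal triage sketch for the CI host follows; the socket path is taken from the log, but the launchd label is an assumption based on a conventional socket_vmnet install, not something this report confirms:

	ls -l /var/run/socket_vmnet                                         # does the socket exist, and who owns it?
	sudo launchctl print system/io.github.lima-vm.socket_vmnet         # is the daemon loaded and running? (label assumed)
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet  # restart the daemon (same assumed label)

If the daemon is up, the same "Connection refused" can also indicate a permissions mismatch between the socket and the jenkins user running the tests.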

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-326000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 7 (30.969125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-326000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
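This failure, and the AddonExistsAfterStop failure below, are downstream of SecondStart: since the profile never came back up, minikube never rewrote the kubeconfig entry, so every kubectl call against the old-k8s-version-326000 context fails before reaching any cluster. A quick confirmation sketch, assuming the same KUBECONFIG the test run uses:

	kubectl config get-contexts                                          # old-k8s-version-326000 should be absent
	out/minikube-darwin-arm64 update-context -p old-k8s-version-326000  # repairs the entry, but only after a successful start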

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-326000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-326000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-326000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.029417ms)

** stderr ** 
	error: context "old-k8s-version-326000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-326000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 7 (28.2635ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-326000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-326000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-326000 "sudo crictl images -o json": exit status 89 (37.446583ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-326000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-326000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-326000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 7 (28.0475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-326000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
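The JSON decode error is expected given exit status 89: the ssh command returned minikube's advisory text, which begins with '*', rather than crictl's JSON, so decoding fails on the first byte. The want-list above is the complete v1.16.0 image set, so a diff reporting every image missing is consistent with a VM that never booted. On a running node the same probe could be validated by hand, e.g. (python3 on the host is an assumption):

	out/minikube-darwin-arm64 ssh -p old-k8s-version-326000 "sudo crictl images -o json" | python3 -m json.tool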

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-326000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-326000 --alsologtostderr -v=1: exit status 89 (39.434542ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-326000"

-- /stdout --
** stderr ** 
	I0701 12:55:44.515043    4771 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:55:44.515405    4771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:44.515408    4771 out.go:309] Setting ErrFile to fd 2...
	I0701 12:55:44.515411    4771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:44.515502    4771 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:55:44.515713    4771 out.go:303] Setting JSON to false
	I0701 12:55:44.515721    4771 mustload.go:65] Loading cluster: old-k8s-version-326000
	I0701 12:55:44.515898    4771 config.go:182] Loaded profile config "old-k8s-version-326000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0701 12:55:44.519646    4771 out.go:177] * The control plane node must be running for this command
	I0701 12:55:44.523825    4771 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-326000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-326000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 7 (27.857333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-326000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 7 (28.065084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-326000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.76s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-146000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-146000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (9.692134625s)

-- stdout --
	* [no-preload-146000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-146000 in cluster no-preload-146000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:55:44.969544    4794 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:55:44.969652    4794 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:44.969655    4794 out.go:309] Setting ErrFile to fd 2...
	I0701 12:55:44.969657    4794 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:44.969734    4794 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:55:44.970721    4794 out.go:303] Setting JSON to false
	I0701 12:55:44.985876    4794 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1514,"bootTime":1688239830,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:55:44.985954    4794 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:55:44.989398    4794 out.go:177] * [no-preload-146000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:55:44.996399    4794 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:55:44.996454    4794 notify.go:220] Checking for updates...
	I0701 12:55:44.999271    4794 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:55:45.002411    4794 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:55:45.005364    4794 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:55:45.008223    4794 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:55:45.011335    4794 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:55:45.014696    4794 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:55:45.014750    4794 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:55:45.018300    4794 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:55:45.025321    4794 start.go:297] selected driver: qemu2
	I0701 12:55:45.025327    4794 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:55:45.025333    4794 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:55:45.027152    4794 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:55:45.028384    4794 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:55:45.031362    4794 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:55:45.031377    4794 cni.go:84] Creating CNI manager for ""
	I0701 12:55:45.031383    4794 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:55:45.031387    4794 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:55:45.031392    4794 start_flags.go:319] config:
	{Name:no-preload-146000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-146000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:55:45.035098    4794 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.043255    4794 out.go:177] * Starting control plane node no-preload-146000 in cluster no-preload-146000
	I0701 12:55:45.047348    4794 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:55:45.047410    4794 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/no-preload-146000/config.json ...
	I0701 12:55:45.047429    4794 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/no-preload-146000/config.json: {Name:mk402f7743831aade9fa7eedb1a9e00ec60b4502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:55:45.047438    4794 cache.go:107] acquiring lock: {Name:mk76593e9ed3a2d4f2c32684ddb73d755dedc8d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047440    4794 cache.go:107] acquiring lock: {Name:mka5c26eaa2b81f5038a91955b52f7bcab184364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047441    4794 cache.go:107] acquiring lock: {Name:mk71b444eadbf49d353c223d7a0ae7d698bf0b44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047458    4794 cache.go:107] acquiring lock: {Name:mk14d3d7b547e3b4c4ba784391a93240992e9c64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047464    4794 cache.go:107] acquiring lock: {Name:mk57092efd79df0678ad338448547ffd24504c4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047462    4794 cache.go:107] acquiring lock: {Name:mk2e4a52f6893c5416b65bade7e87b9cdb1b0baf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047475    4794 cache.go:107] acquiring lock: {Name:mk0ce7c575501e39714d3b76652a34f30980b5d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047653    4794 start.go:365] acquiring machines lock for no-preload-146000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:45.047642    4794 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0701 12:55:45.047650    4794 cache.go:107] acquiring lock: {Name:mkd3dc8b0beb70e309345b8c975aab5dd0efb8ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047605    4794 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0701 12:55:45.047686    4794 start.go:369] acquired machines lock for "no-preload-146000" in 25.834µs
	I0701 12:55:45.047655    4794 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0701 12:55:45.047942    4794 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0701 12:55:45.048062    4794 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0701 12:55:45.048079    4794 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0701 12:55:45.048124    4794 cache.go:115] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0701 12:55:45.048130    4794 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 689.958µs
	I0701 12:55:45.048137    4794 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0701 12:55:45.047700    4794 start.go:93] Provisioning new machine with config: &{Name:no-preload-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-146000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:55:45.048157    4794 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:55:45.048180    4794 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0701 12:55:45.052263    4794 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:55:45.060752    4794 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0701 12:55:45.061315    4794 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0701 12:55:45.061408    4794 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0701 12:55:45.062252    4794 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0701 12:55:45.062317    4794 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0701 12:55:45.064882    4794 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0701 12:55:45.064919    4794 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0701 12:55:45.067498    4794 start.go:159] libmachine.API.Create for "no-preload-146000" (driver="qemu2")
	I0701 12:55:45.067518    4794 client.go:168] LocalClient.Create starting
	I0701 12:55:45.067599    4794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:55:45.067622    4794 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:45.067632    4794 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:45.067676    4794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:55:45.067692    4794 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:45.067700    4794 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:45.068022    4794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:55:45.184005    4794 main.go:141] libmachine: Creating SSH key...
	I0701 12:55:45.312091    4794 main.go:141] libmachine: Creating Disk image...
	I0701 12:55:45.312107    4794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:55:45.312277    4794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2
	I0701 12:55:45.320861    4794 main.go:141] libmachine: STDOUT: 
	I0701 12:55:45.320882    4794 main.go:141] libmachine: STDERR: 
	I0701 12:55:45.320939    4794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2 +20000M
	I0701 12:55:45.328849    4794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:55:45.328867    4794 main.go:141] libmachine: STDERR: 
	I0701 12:55:45.328885    4794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2
	I0701 12:55:45.328891    4794 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:55:45.328950    4794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:86:bc:4f:c7:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2
	I0701 12:55:45.330717    4794 main.go:141] libmachine: STDOUT: 
	I0701 12:55:45.330734    4794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:45.330751    4794 client.go:171] LocalClient.Create took 263.2255ms
	I0701 12:55:46.247518    4794 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0701 12:55:46.267276    4794 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3
	I0701 12:55:46.327139    4794 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0701 12:55:46.448151    4794 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3
	I0701 12:55:46.510857    4794 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0701 12:55:46.700397    4794 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0701 12:55:46.840397    4794 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0701 12:55:46.840457    4794 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.793023542s
	I0701 12:55:46.840491    4794 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0701 12:55:46.925827    4794 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3
	I0701 12:55:47.331064    4794 start.go:128] duration metric: createHost completed in 2.28292s
	I0701 12:55:47.331103    4794 start.go:83] releasing machines lock for "no-preload-146000", held for 2.283449417s
	W0701 12:55:47.331169    4794 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:47.344165    4794 out.go:177] * Deleting "no-preload-146000" in qemu2 ...
	W0701 12:55:47.362576    4794 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:47.362618    4794 start.go:687] Will try again in 5 seconds ...
	I0701 12:55:48.013644    4794 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0701 12:55:48.013697    4794 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 2.966148125s
	I0701 12:55:48.013725    4794 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0701 12:55:49.899792    4794 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 exists
	I0701 12:55:49.899863    4794 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3" took 4.852483958s
	I0701 12:55:49.899889    4794 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 succeeded
	I0701 12:55:50.727698    4794 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 exists
	I0701 12:55:50.727751    4794 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3" took 5.680412541s
	I0701 12:55:50.727776    4794 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 succeeded
	I0701 12:55:50.773659    4794 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 exists
	I0701 12:55:50.773695    4794 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3" took 5.726340833s
	I0701 12:55:50.773725    4794 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 succeeded
	I0701 12:55:51.397722    4794 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 exists
	I0701 12:55:51.397774    4794 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3" took 6.350457917s
	I0701 12:55:51.397809    4794 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 succeeded
	I0701 12:55:52.363549    4794 start.go:365] acquiring machines lock for no-preload-146000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:52.363933    4794 start.go:369] acquired machines lock for "no-preload-146000" in 311.417µs
	I0701 12:55:52.364064    4794 start.go:93] Provisioning new machine with config: &{Name:no-preload-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-146000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:55:52.364351    4794 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:55:52.372833    4794 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:55:52.419570    4794 start.go:159] libmachine.API.Create for "no-preload-146000" (driver="qemu2")
	I0701 12:55:52.419620    4794 client.go:168] LocalClient.Create starting
	I0701 12:55:52.419766    4794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:55:52.419831    4794 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:52.419858    4794 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:52.419968    4794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:55:52.420000    4794 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:52.420019    4794 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:52.420646    4794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:55:52.546404    4794 main.go:141] libmachine: Creating SSH key...
	I0701 12:55:52.577373    4794 main.go:141] libmachine: Creating Disk image...
	I0701 12:55:52.577379    4794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:55:52.577516    4794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2
	I0701 12:55:52.585988    4794 main.go:141] libmachine: STDOUT: 
	I0701 12:55:52.586004    4794 main.go:141] libmachine: STDERR: 
	I0701 12:55:52.586115    4794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2 +20000M
	I0701 12:55:52.593390    4794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:55:52.593404    4794 main.go:141] libmachine: STDERR: 
	I0701 12:55:52.593416    4794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2
	I0701 12:55:52.593423    4794 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:55:52.593470    4794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b2:53:b8:3a:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2
	I0701 12:55:52.594957    4794 main.go:141] libmachine: STDOUT: 
	I0701 12:55:52.594970    4794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:52.594982    4794 client.go:171] LocalClient.Create took 175.360833ms
	I0701 12:55:54.595641    4794 start.go:128] duration metric: createHost completed in 2.231272s
	I0701 12:55:54.595700    4794 start.go:83] releasing machines lock for "no-preload-146000", held for 2.231788167s
	W0701 12:55:54.596047    4794 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:54.604527    4794 out.go:177] 
	W0701 12:55:54.608388    4794 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:55:54.608434    4794 out.go:239] * 
	* 
	W0701 12:55:54.611168    4794 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:55:54.620439    4794 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-146000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000: exit status 7 (66.327667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-146000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.76s)
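
The repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` above means nothing was listening on the socket_vmnet unix socket, so socket_vmnet_client gave up before QEMU was ever launched. A minimal Go sketch for probing that socket follows; it assumes the default /var/run/socket_vmnet path used in this run and is illustrative only, not part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the unix socket that the qemu2 driver's socket_vmnet network
// depends on. A dial error matching "connection refused" reproduces
// the failure above: the socket_vmnet daemon is not running.
func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}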

TestStoppedBinaryUpgrade/Upgrade (2.56s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1529461468.exe start -p stopped-upgrade-894000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1529461468.exe start -p stopped-upgrade-894000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1529461468.exe: permission denied (1.033625ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1529461468.exe start -p stopped-upgrade-894000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1529461468.exe start -p stopped-upgrade-894000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1529461468.exe: permission denied (5.574291ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1529461468.exe start -p stopped-upgrade-894000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1529461468.exe start -p stopped-upgrade-894000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1529461468.exe: permission denied (5.736292ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1529461468.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.56s)
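
This failure is different from the socket_vmnet ones: fork/exec on the downloaded v1.6.2 binary reports permission denied, which on darwin typically means the temp file was written without its execute bit. A minimal Go sketch of the usual remedy (set the mode before exec'ing) follows; the path comes from the failure above, and running "version" is only an illustrative invocation:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Temp binary from the failure above; fork/exec fails with
	// "permission denied" until the execute bit is set.
	bin := "/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1529461468.exe"
	if err := os.Chmod(bin, 0o755); err != nil {
		fmt.Println("chmod failed:", err)
		return
	}
	out, err := exec.Command(bin, "version").CombinedOutput()
	fmt.Println(string(out), err)
}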

TestStoppedBinaryUpgrade/MinikubeLogs (0.14s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-894000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-894000: exit status 85 (140.366958ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-674000 sudo                                  | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | systemctl status cri-docker                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p calico-674000 sudo                                  | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | systemctl cat cri-docker                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p calico-674000 sudo cat                              | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                        |         |         |                     |                     |
	| ssh     | -p calico-674000 sudo cat                              | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p calico-674000 sudo                                  | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p calico-674000 sudo                                  | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p calico-674000 sudo                                  | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p calico-674000 sudo cat                              | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p calico-674000 sudo cat                              | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p calico-674000 sudo                                  | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p calico-674000 sudo                                  | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p calico-674000 sudo                                  | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p calico-674000 sudo find                             | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p calico-674000 sudo crio                             | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p calico-674000                                       | calico-674000          | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT | 01 Jul 23 12:55 PDT |
	| start   | -p false-674000 --memory=3072                          | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --wait-timeout=15m --cni=false                         |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo cat                               | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /etc/nsswitch.conf                                     |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo cat                               | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /etc/hosts                                             |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo cat                               | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /etc/resolv.conf                                       |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo crictl                            | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | pods                                                   |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo crictl ps                         | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | --all                                                  |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo find                              | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                           |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo ip a s                            | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	| ssh     | -p false-674000 sudo ip r s                            | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	| ssh     | -p false-674000 sudo                                   | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | iptables-save                                          |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo iptables                          | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | -t nat -L -n -v                                        |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo systemctl                         | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | status kubelet --all --full                            |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo systemctl                         | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | cat kubelet --no-pager                                 |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo                                   | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | journalctl -xeu kubelet --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo cat                               | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                           |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo cat                               | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                           |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo systemctl                         | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | status docker --all --full                             |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo systemctl                         | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | cat docker --no-pager                                  |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo cat                               | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /etc/docker/daemon.json                                |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo docker                            | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | system info                                            |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo systemctl                         | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | status cri-docker --all --full                         |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo systemctl                         | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | cat cri-docker --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo cat                               | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo cat                               | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo                                   | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo systemctl                         | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | status containerd --all --full                         |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo systemctl                         | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | cat containerd --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo cat                               | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo cat                               | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo                                   | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo systemctl                         | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | status crio --all --full                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo systemctl                         | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | cat crio --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo find                              | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p false-674000 sudo crio                              | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p false-674000                                        | false-674000           | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT | 01 Jul 23 12:55 PDT |
	| start   | -p old-k8s-version-326000                              | old-k8s-version-326000 | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-326000        | old-k8s-version-326000 | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT | 01 Jul 23 12:55 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-326000                              | old-k8s-version-326000 | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT | 01 Jul 23 12:55 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-326000             | old-k8s-version-326000 | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT | 01 Jul 23 12:55 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-326000                              | old-k8s-version-326000 | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| ssh     | -p old-k8s-version-326000 sudo                         | old-k8s-version-326000 | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p old-k8s-version-326000                              | old-k8s-version-326000 | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p old-k8s-version-326000                              | old-k8s-version-326000 | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT | 01 Jul 23 12:55 PDT |
	| delete  | -p old-k8s-version-326000                              | old-k8s-version-326000 | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT | 01 Jul 23 12:55 PDT |
	| start   | -p no-preload-146000                                   | no-preload-146000      | jenkins | v1.30.1 | 01 Jul 23 12:55 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=qemu2                         |                        |         |         |                     |                     |
	|         |  --kubernetes-version=v1.27.3                          |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/01 12:55:44
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:55:44.969544    4794 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:55:44.969652    4794 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:44.969655    4794 out.go:309] Setting ErrFile to fd 2...
	I0701 12:55:44.969657    4794 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:44.969734    4794 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:55:44.970721    4794 out.go:303] Setting JSON to false
	I0701 12:55:44.985876    4794 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1514,"bootTime":1688239830,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:55:44.985954    4794 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:55:44.989398    4794 out.go:177] * [no-preload-146000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:55:44.996399    4794 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:55:44.996454    4794 notify.go:220] Checking for updates...
	I0701 12:55:44.999271    4794 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:55:45.002411    4794 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:55:45.005364    4794 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:55:45.008223    4794 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:55:45.011335    4794 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:55:45.014696    4794 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:55:45.014750    4794 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:55:45.018300    4794 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:55:45.025321    4794 start.go:297] selected driver: qemu2
	I0701 12:55:45.025327    4794 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:55:45.025333    4794 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:55:45.027152    4794 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:55:45.028384    4794 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:55:45.031362    4794 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:55:45.031377    4794 cni.go:84] Creating CNI manager for ""
	I0701 12:55:45.031383    4794 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:55:45.031387    4794 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:55:45.031392    4794 start_flags.go:319] config:
	{Name:no-preload-146000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-146000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:55:45.035098    4794 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.043255    4794 out.go:177] * Starting control plane node no-preload-146000 in cluster no-preload-146000
	I0701 12:55:45.047348    4794 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:55:45.047410    4794 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/no-preload-146000/config.json ...
	I0701 12:55:45.047429    4794 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/no-preload-146000/config.json: {Name:mk402f7743831aade9fa7eedb1a9e00ec60b4502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:55:45.047438    4794 cache.go:107] acquiring lock: {Name:mk76593e9ed3a2d4f2c32684ddb73d755dedc8d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047440    4794 cache.go:107] acquiring lock: {Name:mka5c26eaa2b81f5038a91955b52f7bcab184364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047441    4794 cache.go:107] acquiring lock: {Name:mk71b444eadbf49d353c223d7a0ae7d698bf0b44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047458    4794 cache.go:107] acquiring lock: {Name:mk14d3d7b547e3b4c4ba784391a93240992e9c64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047464    4794 cache.go:107] acquiring lock: {Name:mk57092efd79df0678ad338448547ffd24504c4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047462    4794 cache.go:107] acquiring lock: {Name:mk2e4a52f6893c5416b65bade7e87b9cdb1b0baf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047475    4794 cache.go:107] acquiring lock: {Name:mk0ce7c575501e39714d3b76652a34f30980b5d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047653    4794 start.go:365] acquiring machines lock for no-preload-146000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:45.047642    4794 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0701 12:55:45.047650    4794 cache.go:107] acquiring lock: {Name:mkd3dc8b0beb70e309345b8c975aab5dd0efb8ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:45.047605    4794 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0701 12:55:45.047686    4794 start.go:369] acquired machines lock for "no-preload-146000" in 25.834µs
	I0701 12:55:45.047655    4794 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0701 12:55:45.047942    4794 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0701 12:55:45.048062    4794 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0701 12:55:45.048079    4794 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0701 12:55:45.048124    4794 cache.go:115] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0701 12:55:45.048130    4794 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 689.958µs
	I0701 12:55:45.048137    4794 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0701 12:55:45.047700    4794 start.go:93] Provisioning new machine with config: &{Name:no-preload-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-146000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:55:45.048157    4794 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:55:45.048180    4794 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0701 12:55:45.052263    4794 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:55:45.060752    4794 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0701 12:55:45.061315    4794 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0701 12:55:45.061408    4794 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0701 12:55:45.062252    4794 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0701 12:55:45.062317    4794 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0701 12:55:45.064882    4794 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0701 12:55:45.064919    4794 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0701 12:55:45.067498    4794 start.go:159] libmachine.API.Create for "no-preload-146000" (driver="qemu2")
	I0701 12:55:45.067518    4794 client.go:168] LocalClient.Create starting
	I0701 12:55:45.067599    4794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:55:45.067622    4794 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:45.067632    4794 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:45.067676    4794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:55:45.067692    4794 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:45.067700    4794 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:45.068022    4794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:55:45.184005    4794 main.go:141] libmachine: Creating SSH key...
	I0701 12:55:45.312091    4794 main.go:141] libmachine: Creating Disk image...
	I0701 12:55:45.312107    4794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:55:45.312277    4794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2
	I0701 12:55:45.320861    4794 main.go:141] libmachine: STDOUT: 
	I0701 12:55:45.320882    4794 main.go:141] libmachine: STDERR: 
	I0701 12:55:45.320939    4794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2 +20000M
	I0701 12:55:45.328849    4794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:55:45.328867    4794 main.go:141] libmachine: STDERR: 
	I0701 12:55:45.328885    4794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2
	I0701 12:55:45.328891    4794 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:55:45.328950    4794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:86:bc:4f:c7:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2
	I0701 12:55:45.330717    4794 main.go:141] libmachine: STDOUT: 
	I0701 12:55:45.330734    4794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:45.330751    4794 client.go:171] LocalClient.Create took 263.2255ms
	I0701 12:55:46.247518    4794 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0701 12:55:46.267276    4794 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3
	I0701 12:55:46.327139    4794 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0701 12:55:46.448151    4794 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3
	I0701 12:55:46.510857    4794 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0701 12:55:46.700397    4794 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0701 12:55:46.840397    4794 cache.go:157] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0701 12:55:46.840457    4794 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.793023542s
	I0701 12:55:46.840491    4794 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0701 12:55:46.925827    4794 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3
	I0701 12:55:47.331064    4794 start.go:128] duration metric: createHost completed in 2.28292s
	I0701 12:55:47.331103    4794 start.go:83] releasing machines lock for "no-preload-146000", held for 2.283449417s
	W0701 12:55:47.331169    4794 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:47.344165    4794 out.go:177] * Deleting "no-preload-146000" in qemu2 ...
	
	* 
	* Profile "stopped-upgrade-894000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-894000"

-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.14s)
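
Exit status 85 matches the `Profile "stopped-upgrade-894000" not found` message in the stdout above: the preceding Upgrade test never managed to create the profile, so there is nothing for `minikube logs` to collect. A small Go sketch of that precondition check follows; it assumes MINIKUBE_HOME points at the .minikube directory (as it does in this run) and the standard profiles/<name>/config.json layout:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// e.g. /Users/jenkins/minikube-integration/15452-1041/.minikube in this run
	home := os.Getenv("MINIKUBE_HOME")
	cfg := filepath.Join(home, "profiles", "stopped-upgrade-894000", "config.json")
	if _, err := os.Stat(cfg); err != nil {
		fmt.Println("profile missing; `minikube logs -p` will exit 85:", err)
		return
	}
	fmt.Println("profile exists; logs should be collectable")
}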

TestStartStop/group/embed-certs/serial/FirstStart (9.79s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-808000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-808000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (9.72504575s)

-- stdout --
	* [embed-certs-808000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-808000 in cluster embed-certs-808000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-808000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:55:48.902639    4923 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:55:48.902777    4923 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:48.902780    4923 out.go:309] Setting ErrFile to fd 2...
	I0701 12:55:48.902782    4923 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:48.902843    4923 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:55:48.903922    4923 out.go:303] Setting JSON to false
	I0701 12:55:48.919610    4923 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1518,"bootTime":1688239830,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:55:48.919681    4923 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:55:48.923013    4923 out.go:177] * [embed-certs-808000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:55:48.932919    4923 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:55:48.929996    4923 notify.go:220] Checking for updates...
	I0701 12:55:48.940939    4923 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:55:48.948941    4923 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:55:48.956977    4923 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:55:48.964892    4923 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:55:48.969964    4923 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:55:48.974299    4923 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:55:48.974365    4923 config.go:182] Loaded profile config "no-preload-146000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:55:48.974415    4923 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:55:48.977974    4923 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:55:48.984875    4923 start.go:297] selected driver: qemu2
	I0701 12:55:48.984882    4923 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:55:48.984895    4923 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:55:48.986872    4923 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:55:48.990998    4923 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:55:48.994977    4923 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:55:48.994999    4923 cni.go:84] Creating CNI manager for ""
	I0701 12:55:48.995008    4923 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:55:48.995016    4923 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:55:48.995026    4923 start_flags.go:319] config:
	{Name:embed-certs-808000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-808000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:55:48.999397    4923 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:49.006922    4923 out.go:177] * Starting control plane node embed-certs-808000 in cluster embed-certs-808000
	I0701 12:55:49.010965    4923 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:55:49.010991    4923 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:55:49.011003    4923 cache.go:57] Caching tarball of preloaded images
	I0701 12:55:49.011077    4923 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:55:49.011082    4923 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:55:49.011157    4923 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/embed-certs-808000/config.json ...
	I0701 12:55:49.011170    4923 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/embed-certs-808000/config.json: {Name:mka411500e91375d735b2177c7e38d332464a925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:55:49.011358    4923 start.go:365] acquiring machines lock for embed-certs-808000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:49.011388    4923 start.go:369] acquired machines lock for "embed-certs-808000" in 24.375µs
	I0701 12:55:49.011400    4923 start.go:93] Provisioning new machine with config: &{Name:embed-certs-808000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-808000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:55:49.011431    4923 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:55:49.019902    4923 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:55:49.037098    4923 start.go:159] libmachine.API.Create for "embed-certs-808000" (driver="qemu2")
	I0701 12:55:49.037123    4923 client.go:168] LocalClient.Create starting
	I0701 12:55:49.037185    4923 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:55:49.037209    4923 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:49.037221    4923 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:49.037250    4923 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:55:49.037266    4923 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:49.037279    4923 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:49.037596    4923 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:55:49.152821    4923 main.go:141] libmachine: Creating SSH key...
	I0701 12:55:49.191936    4923 main.go:141] libmachine: Creating Disk image...
	I0701 12:55:49.191944    4923 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:55:49.192100    4923 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/disk.qcow2
	I0701 12:55:49.200529    4923 main.go:141] libmachine: STDOUT: 
	I0701 12:55:49.200545    4923 main.go:141] libmachine: STDERR: 
	I0701 12:55:49.200588    4923 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/disk.qcow2 +20000M
	I0701 12:55:49.207935    4923 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:55:49.207948    4923 main.go:141] libmachine: STDERR: 
	I0701 12:55:49.207964    4923 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/disk.qcow2
	I0701 12:55:49.207971    4923 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:55:49.208011    4923 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:15:31:5a:10:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/disk.qcow2
	I0701 12:55:49.209450    4923 main.go:141] libmachine: STDOUT: 
	I0701 12:55:49.209464    4923 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:49.209482    4923 client.go:171] LocalClient.Create took 172.357ms
	I0701 12:55:51.211612    4923 start.go:128] duration metric: createHost completed in 2.200200875s
	I0701 12:55:51.211707    4923 start.go:83] releasing machines lock for "embed-certs-808000", held for 2.200350125s
	W0701 12:55:51.211840    4923 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:51.219159    4923 out.go:177] * Deleting "embed-certs-808000" in qemu2 ...
	W0701 12:55:51.252103    4923 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:51.252145    4923 start.go:687] Will try again in 5 seconds ...
	I0701 12:55:56.254294    4923 start.go:365] acquiring machines lock for embed-certs-808000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:56.254690    4923 start.go:369] acquired machines lock for "embed-certs-808000" in 310.875µs
	I0701 12:55:56.254813    4923 start.go:93] Provisioning new machine with config: &{Name:embed-certs-808000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-808000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:55:56.255124    4923 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:55:56.263722    4923 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:55:56.308639    4923 start.go:159] libmachine.API.Create for "embed-certs-808000" (driver="qemu2")
	I0701 12:55:56.308682    4923 client.go:168] LocalClient.Create starting
	I0701 12:55:56.308788    4923 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:55:56.308845    4923 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:56.308861    4923 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:56.308952    4923 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:55:56.308980    4923 main.go:141] libmachine: Decoding PEM data...
	I0701 12:55:56.308991    4923 main.go:141] libmachine: Parsing certificate...
	I0701 12:55:56.309579    4923 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:55:56.431503    4923 main.go:141] libmachine: Creating SSH key...
	I0701 12:55:56.542433    4923 main.go:141] libmachine: Creating Disk image...
	I0701 12:55:56.542439    4923 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:55:56.542578    4923 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/disk.qcow2
	I0701 12:55:56.551306    4923 main.go:141] libmachine: STDOUT: 
	I0701 12:55:56.551322    4923 main.go:141] libmachine: STDERR: 
	I0701 12:55:56.551391    4923 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/disk.qcow2 +20000M
	I0701 12:55:56.558514    4923 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:55:56.558530    4923 main.go:141] libmachine: STDERR: 
	I0701 12:55:56.558546    4923 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/disk.qcow2
	I0701 12:55:56.558558    4923 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:55:56.558596    4923 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:d6:5a:d3:ad:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/disk.qcow2
	I0701 12:55:56.560151    4923 main.go:141] libmachine: STDOUT: 
	I0701 12:55:56.560164    4923 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:56.560198    4923 client.go:171] LocalClient.Create took 251.493584ms
	I0701 12:55:58.562353    4923 start.go:128] duration metric: createHost completed in 2.307234125s
	I0701 12:55:58.562456    4923 start.go:83] releasing machines lock for "embed-certs-808000", held for 2.307781959s
	W0701 12:55:58.562937    4923 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-808000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-808000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:58.572265    4923 out.go:177] 
	W0701 12:55:58.575463    4923 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:55:58.575488    4923 out.go:239] * 
	* 
	W0701 12:55:58.578270    4923 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:55:58.587357    4923 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-808000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000: exit status 7 (65.91675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-808000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.79s)
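
Note on the root cause: every qemu2 start in this report fails at the same step, before the VM exists. socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched. A minimal host-side check, assuming socket_vmnet was installed via Homebrew and runs as a root service (the service name is an assumption, not something this log confirms):

	# Is the unix socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If either check fails, restart the daemon; it needs root to create
	# the vmnet interface behind the socket.
	sudo brew services restart socket_vmnet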

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-146000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-146000 create -f testdata/busybox.yaml: exit status 1 (28.851917ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-146000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000: exit status 7 (28.023666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-146000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000: exit status 7 (27.555916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-146000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.08s)
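
The "error: no openapi getter" from kubectl create is best read as a downstream symptom: the no-preload-146000 context exists in the kubeconfig, but the apiserver behind it never came up, so kubectl has no schema source for validating the manifest. A quick diagnostic sketch (not part of the test suite) to confirm the cluster is unreachable before suspecting busybox.yaml:

	# Fails fast when the apiserver behind the context is down.
	kubectl --context no-preload-146000 cluster-info

	# Skip schema validation to check whether only the schema fetch is failing.
	kubectl --context no-preload-146000 create -f testdata/busybox.yaml --validate=false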

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-146000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-146000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-146000 describe deploy/metrics-server -n kube-system: exit status 1 (26.614ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-146000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-146000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000: exit status 7 (27.969083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-146000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
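
The assertion expects the metrics-server deployment to reference the --registries override joined with the --images value, i.e. fake.domain/registry.k8s.io/echoserver:1.4. On a healthy cluster that check reduces to reading a single field; a sketch, assuming the pod template has one container:

	# Prints fake.domain/registry.k8s.io/echoserver:1.4 on a passing run.
	kubectl --context no-preload-146000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'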

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-146000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-146000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (5.17693375s)

                                                
                                                
-- stdout --
	* [no-preload-146000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-146000 in cluster no-preload-146000
	* Restarting existing qemu2 VM for "no-preload-146000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-146000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 12:55:55.070435    4958 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:55:55.070547    4958 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:55.070549    4958 out.go:309] Setting ErrFile to fd 2...
	I0701 12:55:55.070551    4958 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:55.070613    4958 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:55:55.071529    4958 out.go:303] Setting JSON to false
	I0701 12:55:55.086490    4958 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1525,"bootTime":1688239830,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:55:55.086568    4958 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:55:55.091262    4958 out.go:177] * [no-preload-146000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:55:55.099139    4958 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:55:55.103282    4958 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:55:55.099182    4958 notify.go:220] Checking for updates...
	I0701 12:55:55.110209    4958 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:55:55.113238    4958 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:55:55.116223    4958 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:55:55.117649    4958 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:55:55.121539    4958 config.go:182] Loaded profile config "no-preload-146000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:55:55.121776    4958 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:55:55.125260    4958 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 12:55:55.130258    4958 start.go:297] selected driver: qemu2
	I0701 12:55:55.130263    4958 start.go:944] validating driver "qemu2" against &{Name:no-preload-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-146000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:55:55.130319    4958 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:55:55.132291    4958 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:55:55.132315    4958 cni.go:84] Creating CNI manager for ""
	I0701 12:55:55.132320    4958 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:55:55.132325    4958 start_flags.go:319] config:
	{Name:no-preload-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-146000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:55:55.136184    4958 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:55.140197    4958 out.go:177] * Starting control plane node no-preload-146000 in cluster no-preload-146000
	I0701 12:55:55.148142    4958 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:55:55.148199    4958 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/no-preload-146000/config.json ...
	I0701 12:55:55.148209    4958 cache.go:107] acquiring lock: {Name:mk71b444eadbf49d353c223d7a0ae7d698bf0b44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:55.148217    4958 cache.go:107] acquiring lock: {Name:mk14d3d7b547e3b4c4ba784391a93240992e9c64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:55.148232    4958 cache.go:107] acquiring lock: {Name:mk76593e9ed3a2d4f2c32684ddb73d755dedc8d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:55.148271    4958 cache.go:115] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0701 12:55:55.148276    4958 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 68.375µs
	I0701 12:55:55.148283    4958 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0701 12:55:55.148278    4958 cache.go:115] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 exists
	I0701 12:55:55.148288    4958 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3" took 80.25µs
	I0701 12:55:55.148286    4958 cache.go:107] acquiring lock: {Name:mk2e4a52f6893c5416b65bade7e87b9cdb1b0baf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:55.148292    4958 cache.go:115] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 exists
	I0701 12:55:55.148298    4958 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3" took 89.75µs
	I0701 12:55:55.148304    4958 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 succeeded
	I0701 12:55:55.148297    4958 cache.go:107] acquiring lock: {Name:mkd3dc8b0beb70e309345b8c975aab5dd0efb8ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:55.148306    4958 cache.go:107] acquiring lock: {Name:mk57092efd79df0678ad338448547ffd24504c4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:55.148292    4958 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 succeeded
	I0701 12:55:55.148352    4958 cache.go:115] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 exists
	I0701 12:55:55.148355    4958 cache.go:115] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0701 12:55:55.148356    4958 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3" took 50.917µs
	I0701 12:55:55.148361    4958 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 succeeded
	I0701 12:55:55.148364    4958 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 62.334µs
	I0701 12:55:55.148371    4958 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0701 12:55:55.148334    4958 cache.go:115] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0701 12:55:55.148414    4958 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 123.667µs
	I0701 12:55:55.148429    4958 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0701 12:55:55.148383    4958 cache.go:107] acquiring lock: {Name:mka5c26eaa2b81f5038a91955b52f7bcab184364 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:55.148428    4958 cache.go:107] acquiring lock: {Name:mk0ce7c575501e39714d3b76652a34f30980b5d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:55.148467    4958 cache.go:115] /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 exists
	I0701 12:55:55.148476    4958 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3" took 113.959µs
	I0701 12:55:55.148482    4958 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 succeeded
	I0701 12:55:55.148492    4958 start.go:365] acquiring machines lock for no-preload-146000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:55.148493    4958 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0701 12:55:55.148520    4958 start.go:369] acquired machines lock for "no-preload-146000" in 21.417µs
	I0701 12:55:55.148530    4958 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:55:55.148536    4958 fix.go:54] fixHost starting: 
	I0701 12:55:55.148657    4958 fix.go:102] recreateIfNeeded on no-preload-146000: state=Stopped err=<nil>
	W0701 12:55:55.148663    4958 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:55:55.157237    4958 out.go:177] * Restarting existing qemu2 VM for "no-preload-146000" ...
	I0701 12:55:55.160270    4958 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b2:53:b8:3a:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2
	I0701 12:55:55.161217    4958 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0701 12:55:55.162474    4958 main.go:141] libmachine: STDOUT: 
	I0701 12:55:55.162498    4958 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:55.162531    4958 fix.go:56] fixHost completed within 13.995375ms
	I0701 12:55:55.162535    4958 start.go:83] releasing machines lock for "no-preload-146000", held for 14.011875ms
	W0701 12:55:55.162544    4958 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:55:55.162585    4958 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:55.162589    4958 start.go:687] Will try again in 5 seconds ...
	I0701 12:55:56.169641    4958 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0701 12:56:00.163101    4958 start.go:365] acquiring machines lock for no-preload-146000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:56:00.163440    4958 start.go:369] acquired machines lock for "no-preload-146000" in 269.083µs
	I0701 12:56:00.163567    4958 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:56:00.163585    4958 fix.go:54] fixHost starting: 
	I0701 12:56:00.164253    4958 fix.go:102] recreateIfNeeded on no-preload-146000: state=Stopped err=<nil>
	W0701 12:56:00.164279    4958 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:56:00.173610    4958 out.go:177] * Restarting existing qemu2 VM for "no-preload-146000" ...
	I0701 12:56:00.176818    4958 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b2:53:b8:3a:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2
	I0701 12:56:00.185723    4958 main.go:141] libmachine: STDOUT: 
	I0701 12:56:00.185775    4958 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:56:00.185836    4958 fix.go:56] fixHost completed within 22.254ms
	I0701 12:56:00.185853    4958 start.go:83] releasing machines lock for "no-preload-146000", held for 22.395417ms
	W0701 12:56:00.186054    4958 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-146000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-146000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:56:00.193725    4958 out.go:177] 
	W0701 12:56:00.197714    4958 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:56:00.197758    4958 out.go:239] * 
	* 
	W0701 12:56:00.200575    4958 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:56:00.208630    4958 out.go:177] 

                                                
                                                
** /stderr **
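
A side note on the cache traffic in this log: with --preload=false the start path checks each kubeadm image against the on-disk cache individually rather than unpacking one preload tarball. Every image except registry.k8s.io/etcd:3.5.7-0 was already cached (each check completed in microseconds); the etcd miss surfaces as the "daemon lookup ... No such image" line followed by cache.go:162 opening the etcd tarball to fetch it. The cache is plain files under MINIKUBE_HOME, so its state can be inspected directly:

	# One tarball per cached image, keyed by registry path and tag.
	ls /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/images/arm64/registry.k8s.io/
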
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-146000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000: exit status 7 (62.70575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-146000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.24s)
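
SecondStart takes the fixHost path ("Skipping create...Using existing machine configuration") and dies on the same socket. To separate a QEMU problem from a socket_vmnet problem, the command logged above can be replayed by hand with user-mode networking in place of the socket_vmnet file descriptor. Diagnostic only: user-mode NAT gives the guest an address minikube does not expect, so a successful boot here only rules out QEMU itself:

	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
	  -display none \
	  -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/boot2docker.iso \
	  -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
	  /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/no-preload-146000/disk.qcow2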

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-808000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-808000 create -f testdata/busybox.yaml: exit status 1 (29.787167ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-808000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000: exit status 7 (27.46875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-808000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000: exit status 7 (27.882583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-808000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
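
As with no-preload above, DeployApp can only fail here because FirstStart left the profile stopped; the kubectl error carries no extra signal once the host state is known. The status command the harness already runs makes a cheap guard; a sketch:

	# Skip cluster-level checks when the host never came up.
	host=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p embed-certs-808000)
	if [ "$host" != "Running" ]; then
	  echo "host is $host; skipping kubectl checks"
	fi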

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-808000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-808000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-808000 describe deploy/metrics-server -n kube-system: exit status 1 (25.359375ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-808000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-808000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000: exit status 7 (27.656541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-808000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-808000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-808000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (5.194617333s)

                                                
                                                
-- stdout --
	* [embed-certs-808000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-808000 in cluster embed-certs-808000
	* Restarting existing qemu2 VM for "embed-certs-808000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-808000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 12:55:59.029541    5003 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:55:59.029657    5003 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:59.029660    5003 out.go:309] Setting ErrFile to fd 2...
	I0701 12:55:59.029662    5003 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:55:59.029733    5003 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:55:59.030662    5003 out.go:303] Setting JSON to false
	I0701 12:55:59.045697    5003 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1529,"bootTime":1688239830,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:55:59.045768    5003 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:55:59.050732    5003 out.go:177] * [embed-certs-808000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:55:59.060616    5003 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:55:59.057682    5003 notify.go:220] Checking for updates...
	I0701 12:55:59.068579    5003 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:55:59.075564    5003 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:55:59.083589    5003 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:55:59.087538    5003 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:55:59.090633    5003 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:55:59.094688    5003 config.go:182] Loaded profile config "embed-certs-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:55:59.094927    5003 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:55:59.099620    5003 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 12:55:59.105609    5003 start.go:297] selected driver: qemu2
	I0701 12:55:59.105615    5003 start.go:944] validating driver "qemu2" against &{Name:embed-certs-808000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-808000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:55:59.105674    5003 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:55:59.107720    5003 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:55:59.107751    5003 cni.go:84] Creating CNI manager for ""
	I0701 12:55:59.107757    5003 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:55:59.107762    5003 start_flags.go:319] config:
	{Name:embed-certs-808000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-808000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:55:59.111827    5003 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:55:59.119593    5003 out.go:177] * Starting control plane node embed-certs-808000 in cluster embed-certs-808000
	I0701 12:55:59.123548    5003 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:55:59.123570    5003 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:55:59.123586    5003 cache.go:57] Caching tarball of preloaded images
	I0701 12:55:59.123639    5003 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:55:59.123644    5003 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:55:59.123696    5003 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/embed-certs-808000/config.json ...
	I0701 12:55:59.123944    5003 start.go:365] acquiring machines lock for embed-certs-808000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:55:59.123970    5003 start.go:369] acquired machines lock for "embed-certs-808000" in 19.917µs
	I0701 12:55:59.123981    5003 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:55:59.123986    5003 fix.go:54] fixHost starting: 
	I0701 12:55:59.124106    5003 fix.go:102] recreateIfNeeded on embed-certs-808000: state=Stopped err=<nil>
	W0701 12:55:59.124114    5003 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:55:59.131669    5003 out.go:177] * Restarting existing qemu2 VM for "embed-certs-808000" ...
	I0701 12:55:59.135678    5003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:d6:5a:d3:ad:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/disk.qcow2
	I0701 12:55:59.137524    5003 main.go:141] libmachine: STDOUT: 
	I0701 12:55:59.137542    5003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:55:59.137570    5003 fix.go:56] fixHost completed within 13.584209ms
	I0701 12:55:59.137575    5003 start.go:83] releasing machines lock for "embed-certs-808000", held for 13.6005ms
	W0701 12:55:59.137583    5003 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:55:59.137616    5003 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:55:59.137620    5003 start.go:687] Will try again in 5 seconds ...
	I0701 12:56:04.139702    5003 start.go:365] acquiring machines lock for embed-certs-808000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:56:04.140162    5003 start.go:369] acquired machines lock for "embed-certs-808000" in 349.792µs
	I0701 12:56:04.140296    5003 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:56:04.140321    5003 fix.go:54] fixHost starting: 
	I0701 12:56:04.141188    5003 fix.go:102] recreateIfNeeded on embed-certs-808000: state=Stopped err=<nil>
	W0701 12:56:04.141214    5003 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:56:04.151609    5003 out.go:177] * Restarting existing qemu2 VM for "embed-certs-808000" ...
	I0701 12:56:04.154808    5003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:d6:5a:d3:ad:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/embed-certs-808000/disk.qcow2
	I0701 12:56:04.163207    5003 main.go:141] libmachine: STDOUT: 
	I0701 12:56:04.163269    5003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:56:04.163404    5003 fix.go:56] fixHost completed within 23.086708ms
	I0701 12:56:04.163429    5003 start.go:83] releasing machines lock for "embed-certs-808000", held for 23.225084ms
	W0701 12:56:04.163644    5003 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-808000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-808000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:56:04.172517    5003 out.go:177] 
	W0701 12:56:04.175602    5003 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:56:04.175642    5003 out.go:239] * 
	* 
	W0701 12:56:04.178106    5003 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:56:04.186472    5003 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-808000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000: exit status 7 (67.328167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-808000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
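Every failure in this group reduces to the same root cause visible in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never launched. A minimal Go sketch of a preflight probe for that socket (the path comes from the SocketVMnetPath field in the config above; probeVMNet is a hypothetical helper, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeVMNet dials the socket_vmnet unix socket the same way
	// socket_vmnet_client must before it can hand QEMU a file descriptor;
	// "connection refused" here means the socket_vmnet daemon is not running.
	func probeVMNet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := probeVMNet("/var/run/socket_vmnet"); err != nil {
			fmt.Println(err) // mirrors the "Connection refused" above
		}
	}

On this agent the daemon appears to have been down for the whole 12:55-12:56 window, since both the restart path here and the fresh-create path in the tests below fail identically.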

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-146000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000: exit status 7 (30.556083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-146000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-146000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-146000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-146000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.004875ms)

** stderr ** 
	error: context "no-preload-146000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-146000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000: exit status 7 (27.46825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-146000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)
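The two failures above never reach a cluster: FirstStart and SecondStart failed, so the profile's context was never written to the kubeconfig and every client-config lookup aborts immediately. A sketch of that lookup, assuming client-go's clientcmd package (the exact error text depends on what kubeconfig is on disk):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Requesting a context by name that the kubeconfig does not contain
		// fails validation, which is the chain behind the messages above.
		_, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			&clientcmd.ClientConfigLoadingRules{},
			&clientcmd.ConfigOverrides{CurrentContext: "no-preload-146000"},
		).ClientConfig()
		fmt.Println(err) // e.g. context "no-preload-146000" does not exist
	}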

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-146000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-146000 "sudo crictl images -o json": exit status 89 (37.532459ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-146000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-146000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-146000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000: exit status 7 (27.1965ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-146000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
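The `invalid character '*'` message above is the JSON decoder reading the first byte of the plain-text advice banner instead of a `{`: with the host stopped, `minikube ssh` prints advice rather than `crictl images -o json` output. A sketch of the decode step under that assumption (the struct shape is an illustration keyed to the image list above, not the test's exact types):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList is an illustrative shape for `crictl images -o json`:
	// an images array whose repoTags would carry entries like
	// "registry.k8s.io/kube-apiserver:v1.27.3" from the -want list above.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// what the test actually received: the advice banner, not JSON
		banner := []byte("* The control plane node must be running for this command")
		var l imageList
		err := json.Unmarshal(banner, &l)
		fmt.Println(err) // invalid character '*' looking for beginning of value
	}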

TestStartStop/group/no-preload/serial/Pause (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-146000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-146000 --alsologtostderr -v=1: exit status 89 (38.168291ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-146000"

-- /stdout --
** stderr ** 
	I0701 12:56:00.461823    5022 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:56:00.461974    5022 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:00.461977    5022 out.go:309] Setting ErrFile to fd 2...
	I0701 12:56:00.461979    5022 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:00.462045    5022 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:56:00.462260    5022 out.go:303] Setting JSON to false
	I0701 12:56:00.462269    5022 mustload.go:65] Loading cluster: no-preload-146000
	I0701 12:56:00.462426    5022 config.go:182] Loaded profile config "no-preload-146000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:56:00.466853    5022 out.go:177] * The control plane node must be running for this command
	I0701 12:56:00.470952    5022 out.go:177]   To start a cluster, run: "minikube start -p no-preload-146000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-146000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000: exit status 7 (27.608125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-146000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000: exit status 7 (27.508083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-146000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.09s)
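Each post-mortem above runs `minikube status --format={{.Host}}`; that format string is a Go text/template rendered over a status struct, which is why the command prints the bare word `Stopped`. A minimal sketch of the rendering (the one-field struct is illustrative only, not minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// status stands in for the struct the --format template selects from.
	type status struct{ Host string }

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		_ = tmpl.Execute(os.Stdout, status{Host: "Stopped"}) // prints: Stopped
	}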

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-457000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-457000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (9.78291375s)

-- stdout --
	* [default-k8s-diff-port-457000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-457000 in cluster default-k8s-diff-port-457000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-457000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0701 12:56:01.148149    5057 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:56:01.148271    5057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:01.148274    5057 out.go:309] Setting ErrFile to fd 2...
	I0701 12:56:01.148277    5057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:01.148347    5057 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:56:01.149325    5057 out.go:303] Setting JSON to false
	I0701 12:56:01.164737    5057 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1531,"bootTime":1688239830,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:56:01.164807    5057 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:56:01.169537    5057 out.go:177] * [default-k8s-diff-port-457000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:56:01.179504    5057 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:56:01.175567    5057 notify.go:220] Checking for updates...
	I0701 12:56:01.186456    5057 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:56:01.189539    5057 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:56:01.192530    5057 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:56:01.195557    5057 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:56:01.198532    5057 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:56:01.199994    5057 config.go:182] Loaded profile config "embed-certs-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:56:01.200058    5057 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:56:01.200102    5057 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:56:01.204485    5057 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:56:01.211338    5057 start.go:297] selected driver: qemu2
	I0701 12:56:01.211343    5057 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:56:01.211350    5057 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:56:01.213472    5057 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:56:01.216507    5057 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:56:01.219635    5057 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:56:01.219662    5057 cni.go:84] Creating CNI manager for ""
	I0701 12:56:01.219669    5057 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:56:01.219674    5057 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:56:01.219683    5057 start_flags.go:319] config:
	{Name:default-k8s-diff-port-457000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-457000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:56:01.224370    5057 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:56:01.232530    5057 out.go:177] * Starting control plane node default-k8s-diff-port-457000 in cluster default-k8s-diff-port-457000
	I0701 12:56:01.236552    5057 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:56:01.236600    5057 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:56:01.236613    5057 cache.go:57] Caching tarball of preloaded images
	I0701 12:56:01.236688    5057 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:56:01.236694    5057 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:56:01.236760    5057 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/default-k8s-diff-port-457000/config.json ...
	I0701 12:56:01.236787    5057 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/default-k8s-diff-port-457000/config.json: {Name:mk3f2e20088dcb030edbe2dc0f26c2dcd83fdc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:56:01.237006    5057 start.go:365] acquiring machines lock for default-k8s-diff-port-457000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:56:01.237039    5057 start.go:369] acquired machines lock for "default-k8s-diff-port-457000" in 25.667µs
	I0701 12:56:01.237053    5057 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-457000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:56:01.237086    5057 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:56:01.241525    5057 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:56:01.257878    5057 start.go:159] libmachine.API.Create for "default-k8s-diff-port-457000" (driver="qemu2")
	I0701 12:56:01.257904    5057 client.go:168] LocalClient.Create starting
	I0701 12:56:01.257961    5057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:56:01.257986    5057 main.go:141] libmachine: Decoding PEM data...
	I0701 12:56:01.257996    5057 main.go:141] libmachine: Parsing certificate...
	I0701 12:56:01.258039    5057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:56:01.258054    5057 main.go:141] libmachine: Decoding PEM data...
	I0701 12:56:01.258064    5057 main.go:141] libmachine: Parsing certificate...
	I0701 12:56:01.258418    5057 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:56:01.369450    5057 main.go:141] libmachine: Creating SSH key...
	I0701 12:56:01.479154    5057 main.go:141] libmachine: Creating Disk image...
	I0701 12:56:01.479161    5057 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:56:01.479312    5057 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/disk.qcow2
	I0701 12:56:01.487916    5057 main.go:141] libmachine: STDOUT: 
	I0701 12:56:01.487930    5057 main.go:141] libmachine: STDERR: 
	I0701 12:56:01.487990    5057 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/disk.qcow2 +20000M
	I0701 12:56:01.495036    5057 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:56:01.495061    5057 main.go:141] libmachine: STDERR: 
	I0701 12:56:01.495083    5057 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/disk.qcow2
	I0701 12:56:01.495090    5057 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:56:01.495138    5057 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:e9:54:e0:99:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/disk.qcow2
	I0701 12:56:01.496671    5057 main.go:141] libmachine: STDOUT: 
	I0701 12:56:01.496684    5057 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:56:01.496701    5057 client.go:171] LocalClient.Create took 238.797583ms
	I0701 12:56:03.498864    5057 start.go:128] duration metric: createHost completed in 2.26175825s
	I0701 12:56:03.498917    5057 start.go:83] releasing machines lock for "default-k8s-diff-port-457000", held for 2.261908209s
	W0701 12:56:03.498978    5057 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:56:03.509146    5057 out.go:177] * Deleting "default-k8s-diff-port-457000" in qemu2 ...
	W0701 12:56:03.530264    5057 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:56:03.530317    5057 start.go:687] Will try again in 5 seconds ...
	I0701 12:56:08.532483    5057 start.go:365] acquiring machines lock for default-k8s-diff-port-457000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:56:08.533023    5057 start.go:369] acquired machines lock for "default-k8s-diff-port-457000" in 429.25µs
	I0701 12:56:08.533153    5057 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-457000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:56:08.533493    5057 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:56:08.544209    5057 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:56:08.590592    5057 start.go:159] libmachine.API.Create for "default-k8s-diff-port-457000" (driver="qemu2")
	I0701 12:56:08.590640    5057 client.go:168] LocalClient.Create starting
	I0701 12:56:08.590801    5057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:56:08.590845    5057 main.go:141] libmachine: Decoding PEM data...
	I0701 12:56:08.590870    5057 main.go:141] libmachine: Parsing certificate...
	I0701 12:56:08.590965    5057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:56:08.590997    5057 main.go:141] libmachine: Decoding PEM data...
	I0701 12:56:08.591015    5057 main.go:141] libmachine: Parsing certificate...
	I0701 12:56:08.591669    5057 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:56:08.714539    5057 main.go:141] libmachine: Creating SSH key...
	I0701 12:56:08.848729    5057 main.go:141] libmachine: Creating Disk image...
	I0701 12:56:08.848736    5057 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:56:08.848902    5057 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/disk.qcow2
	I0701 12:56:08.857786    5057 main.go:141] libmachine: STDOUT: 
	I0701 12:56:08.857798    5057 main.go:141] libmachine: STDERR: 
	I0701 12:56:08.857862    5057 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/disk.qcow2 +20000M
	I0701 12:56:08.865007    5057 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:56:08.865020    5057 main.go:141] libmachine: STDERR: 
	I0701 12:56:08.865035    5057 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/disk.qcow2
	I0701 12:56:08.865041    5057 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:56:08.865081    5057 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:4c:f2:bc:58:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/disk.qcow2
	I0701 12:56:08.866601    5057 main.go:141] libmachine: STDOUT: 
	I0701 12:56:08.866613    5057 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:56:08.866626    5057 client.go:171] LocalClient.Create took 275.985709ms
	I0701 12:56:10.868745    5057 start.go:128] duration metric: createHost completed in 2.335272375s
	I0701 12:56:10.868817    5057 start.go:83] releasing machines lock for "default-k8s-diff-port-457000", held for 2.335810833s
	W0701 12:56:10.869211    5057 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-457000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-457000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:56:10.876912    5057 out.go:177] 
	W0701 12:56:10.881117    5057 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:56:10.881178    5057 out.go:239] * 
	* 
	W0701 12:56:10.883825    5057 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:56:10.891742    5057 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-457000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000: exit status 7 (67.250125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.85s)
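The `executing:` / `STDOUT:` / `STDERR:` triples in the stderr above come from the driver shelling out to qemu-img and socket_vmnet_client and capturing the two streams separately; qemu-img succeeds, and only the socket_vmnet_client launch fails. A hedged sketch of that pattern with os/exec (runAndLog is our name, not libmachine's; the qemu-img arguments are lifted from the log):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// runAndLog runs a command and reports stdout and stderr separately,
	// the same shape as the libmachine log lines above.
	func runAndLog(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		err := cmd.Run()
		fmt.Printf("STDOUT: %s\nSTDERR: %s\n", stdout.String(), stderr.String())
		return err
	}

	func main() {
		// the resize step from the log; the later socket_vmnet_client step
		// is the one that fails with "Connection refused"
		_ = runAndLog("qemu-img", "resize", "disk.qcow2", "+20000M")
	}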

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-808000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000: exit status 7 (30.9475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-808000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-808000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-808000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-808000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.343416ms)

** stderr ** 
	error: context "embed-certs-808000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-808000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000: exit status 7 (27.327125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-808000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-808000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-808000 "sudo crictl images -o json": exit status 89 (37.728167ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-808000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-808000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-808000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000: exit status 7 (27.702416ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-808000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
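The "invalid character '*'" decode error follows directly from exit status 89: with the control plane down, minikube prints advice text (lines starting with "*") on stdout, and the test then hands that text to a JSON decoder instead of the "crictl images -o json" document it expects. A small sketch reproducing the decode failure (the struct shape is an assumption for illustration, not the test's actual type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // stdout carried advice text, not JSON.
        out := []byte(`* The control plane node must be running for this command`)
        var images struct {
            Images []struct {
                RepoTags []string `json:"repoTags"` // field name assumed
            } `json:"images"`
        }
        if err := json.Unmarshal(out, &images); err != nil {
            // Prints: invalid character '*' looking for beginning of value
            fmt.Println("decode error:", err)
        }
    }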

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-808000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-808000 --alsologtostderr -v=1: exit status 89 (39.999125ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-808000"
-- /stdout --
** stderr ** 
	I0701 12:56:04.444930    5081 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:56:04.445074    5081 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:04.445077    5081 out.go:309] Setting ErrFile to fd 2...
	I0701 12:56:04.445079    5081 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:04.445148    5081 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:56:04.445352    5081 out.go:303] Setting JSON to false
	I0701 12:56:04.445361    5081 mustload.go:65] Loading cluster: embed-certs-808000
	I0701 12:56:04.445528    5081 config.go:182] Loaded profile config "embed-certs-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:56:04.449965    5081 out.go:177] * The control plane node must be running for this command
	I0701 12:56:04.454051    5081 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-808000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-808000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000: exit status 7 (27.074042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-808000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000: exit status 7 (27.956916ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-808000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
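The post-mortem's "exit status 7 (may be ok)" reflects that "minikube status" reports component state through its exit code rather than treating a stopped host as a hard failure. A sketch of how a caller can tell "stopped" apart from a broken status command (the interpretation is an assumption for illustration; only the observed behavior, exit code 7 with output "Stopped", comes from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "embed-certs-808000")
        out, err := cmd.Output() // stdout is captured even on non-zero exit
        if exitErr, ok := err.(*exec.ExitError); ok {
            // A non-zero code paired with readable state output means
            // "not running", not "status command broke"; hence "(may be ok)".
            fmt.Printf("host state %q, exit code %d\n", out, exitErr.ExitCode())
            return
        }
        fmt.Printf("host state %q\n", out)
    }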

TestStartStop/group/newest-cni/serial/FirstStart (9.78s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-581000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-581000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (9.706674s)
-- stdout --
	* [newest-cni-581000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-581000 in cluster newest-cni-581000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-581000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0701 12:56:04.897495    5104 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:56:04.897598    5104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:04.897602    5104 out.go:309] Setting ErrFile to fd 2...
	I0701 12:56:04.897605    5104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:04.897672    5104 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:56:04.898666    5104 out.go:303] Setting JSON to false
	I0701 12:56:04.913799    5104 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1534,"bootTime":1688239830,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:56:04.913866    5104 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:56:04.918714    5104 out.go:177] * [newest-cni-581000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:56:04.925636    5104 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:56:04.925678    5104 notify.go:220] Checking for updates...
	I0701 12:56:04.929539    5104 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:56:04.932645    5104 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:56:04.935680    5104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:56:04.938703    5104 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:56:04.941656    5104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:56:04.945033    5104 config.go:182] Loaded profile config "default-k8s-diff-port-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:56:04.945102    5104 config.go:182] Loaded profile config "multinode-757000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:56:04.945150    5104 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:56:04.949648    5104 out.go:177] * Using the qemu2 driver based on user configuration
	I0701 12:56:04.956691    5104 start.go:297] selected driver: qemu2
	I0701 12:56:04.956698    5104 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:56:04.956705    5104 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:56:04.958662    5104 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0701 12:56:04.958681    5104 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0701 12:56:04.962691    5104 out.go:177] * Automatically selected the socket_vmnet network
	I0701 12:56:04.969670    5104 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0701 12:56:04.969684    5104 cni.go:84] Creating CNI manager for ""
	I0701 12:56:04.969692    5104 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:56:04.969699    5104 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0701 12:56:04.969705    5104 start_flags.go:319] config:
	{Name:newest-cni-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-581000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:56:04.974124    5104 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:56:04.981606    5104 out.go:177] * Starting control plane node newest-cni-581000 in cluster newest-cni-581000
	I0701 12:56:04.985682    5104 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:56:04.985708    5104 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:56:04.985718    5104 cache.go:57] Caching tarball of preloaded images
	I0701 12:56:04.985797    5104 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:56:04.985810    5104 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:56:04.985893    5104 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/newest-cni-581000/config.json ...
	I0701 12:56:04.985908    5104 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/newest-cni-581000/config.json: {Name:mk7b864b2d7d17b54674faab6b7b86745754a853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:56:04.986122    5104 start.go:365] acquiring machines lock for newest-cni-581000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:56:04.986153    5104 start.go:369] acquired machines lock for "newest-cni-581000" in 25.375µs
	I0701 12:56:04.986165    5104 start.go:93] Provisioning new machine with config: &{Name:newest-cni-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-581000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:56:04.986195    5104 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:56:04.994649    5104 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:56:05.010942    5104 start.go:159] libmachine.API.Create for "newest-cni-581000" (driver="qemu2")
	I0701 12:56:05.010960    5104 client.go:168] LocalClient.Create starting
	I0701 12:56:05.011021    5104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:56:05.011043    5104 main.go:141] libmachine: Decoding PEM data...
	I0701 12:56:05.011053    5104 main.go:141] libmachine: Parsing certificate...
	I0701 12:56:05.011080    5104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:56:05.011095    5104 main.go:141] libmachine: Decoding PEM data...
	I0701 12:56:05.011101    5104 main.go:141] libmachine: Parsing certificate...
	I0701 12:56:05.011408    5104 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:56:05.122119    5104 main.go:141] libmachine: Creating SSH key...
	I0701 12:56:05.193780    5104 main.go:141] libmachine: Creating Disk image...
	I0701 12:56:05.193787    5104 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:56:05.193930    5104 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/disk.qcow2
	I0701 12:56:05.202317    5104 main.go:141] libmachine: STDOUT: 
	I0701 12:56:05.202333    5104 main.go:141] libmachine: STDERR: 
	I0701 12:56:05.202395    5104 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/disk.qcow2 +20000M
	I0701 12:56:05.209531    5104 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:56:05.209544    5104 main.go:141] libmachine: STDERR: 
	I0701 12:56:05.209562    5104 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/disk.qcow2
	I0701 12:56:05.209567    5104 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:56:05.209610    5104 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:28:4a:0a:80:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/disk.qcow2
	I0701 12:56:05.211107    5104 main.go:141] libmachine: STDOUT: 
	I0701 12:56:05.211120    5104 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:56:05.211134    5104 client.go:171] LocalClient.Create took 200.173792ms
	I0701 12:56:07.213262    5104 start.go:128] duration metric: createHost completed in 2.227082083s
	I0701 12:56:07.213319    5104 start.go:83] releasing machines lock for "newest-cni-581000", held for 2.227198875s
	W0701 12:56:07.213380    5104 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:56:07.224460    5104 out.go:177] * Deleting "newest-cni-581000" in qemu2 ...
	W0701 12:56:07.244111    5104 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:56:07.244133    5104 start.go:687] Will try again in 5 seconds ...
	I0701 12:56:12.244782    5104 start.go:365] acquiring machines lock for newest-cni-581000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:56:12.245192    5104 start.go:369] acquired machines lock for "newest-cni-581000" in 294.125µs
	I0701 12:56:12.245310    5104 start.go:93] Provisioning new machine with config: &{Name:newest-cni-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-581000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0701 12:56:12.245650    5104 start.go:125] createHost starting for "" (driver="qemu2")
	I0701 12:56:12.254935    5104 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0701 12:56:12.301275    5104 start.go:159] libmachine.API.Create for "newest-cni-581000" (driver="qemu2")
	I0701 12:56:12.301317    5104 client.go:168] LocalClient.Create starting
	I0701 12:56:12.301422    5104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/ca.pem
	I0701 12:56:12.301474    5104 main.go:141] libmachine: Decoding PEM data...
	I0701 12:56:12.301508    5104 main.go:141] libmachine: Parsing certificate...
	I0701 12:56:12.301608    5104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1041/.minikube/certs/cert.pem
	I0701 12:56:12.301641    5104 main.go:141] libmachine: Decoding PEM data...
	I0701 12:56:12.301664    5104 main.go:141] libmachine: Parsing certificate...
	I0701 12:56:12.302276    5104 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso...
	I0701 12:56:12.426545    5104 main.go:141] libmachine: Creating SSH key...
	I0701 12:56:12.519978    5104 main.go:141] libmachine: Creating Disk image...
	I0701 12:56:12.519987    5104 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0701 12:56:12.520127    5104 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/disk.qcow2
	I0701 12:56:12.528615    5104 main.go:141] libmachine: STDOUT: 
	I0701 12:56:12.528632    5104 main.go:141] libmachine: STDERR: 
	I0701 12:56:12.528689    5104 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/disk.qcow2 +20000M
	I0701 12:56:12.535866    5104 main.go:141] libmachine: STDOUT: Image resized.
	
	I0701 12:56:12.535878    5104 main.go:141] libmachine: STDERR: 
	I0701 12:56:12.535899    5104 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/disk.qcow2
	I0701 12:56:12.535907    5104 main.go:141] libmachine: Starting QEMU VM...
	I0701 12:56:12.535949    5104 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:97:3a:6d:9f:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/disk.qcow2
	I0701 12:56:12.537428    5104 main.go:141] libmachine: STDOUT: 
	I0701 12:56:12.537441    5104 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:56:12.537454    5104 client.go:171] LocalClient.Create took 236.136292ms
	I0701 12:56:14.539573    5104 start.go:128] duration metric: createHost completed in 2.29393825s
	I0701 12:56:14.539641    5104 start.go:83] releasing machines lock for "newest-cni-581000", held for 2.294471916s
	W0701 12:56:14.540070    5104 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:56:14.547740    5104 out.go:177] 
	W0701 12:56:14.552762    5104 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:56:14.552804    5104 out.go:239] * 
	* 
	W0701 12:56:14.555356    5104 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:56:14.564650    5104 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-581000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-581000 -n newest-cni-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-581000 -n newest-cni-581000: exit status 7 (66.532834ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-581000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.78s)
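Before the launch fails, disk preparation succeeds: the log shows "qemu-img convert" building the qcow2 from the raw boot image and "qemu-img resize ... +20000M" growing it; only the subsequent handoff to socket_vmnet_client dies. A sketch mirroring those two steps (placeholder paths, not the CI paths above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        disk := "/tmp/newest-cni-581000/disk.qcow2" // placeholder path
        steps := [][]string{
            // Same two qemu-img invocations as in the log above.
            {"qemu-img", "convert", "-f", "raw", "-O", "qcow2", disk + ".raw", disk},
            {"qemu-img", "resize", disk, "+20000M"},
        }
        for _, s := range steps {
            out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
            fmt.Printf("%v -> %s (err=%v)\n", s, out, err)
        }
    }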

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-457000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-457000 create -f testdata/busybox.yaml: exit status 1 (29.592042ms)
** stderr ** 
	error: no openapi getter
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-457000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000: exit status 7 (27.801458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-457000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000: exit status 7 (27.416917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
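The "no openapi getter" error is another offline symptom: "kubectl create" wants the server's OpenAPI document to validate the manifest against, and with the VM stopped there is no API server to fetch it from. A diagnostic sketch probing the endpoint directly ("kubectl get --raw" is standard kubectl; the context name comes from the log; this is not the test's code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Fetch the OpenAPI document the create path depends on; with the
        // cluster down this fails, just as the create above does.
        cmd := exec.Command("kubectl", "--context", "default-k8s-diff-port-457000",
            "get", "--raw", "/openapi/v2")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("openapi fetch failed: %v\n%s", err, out)
        }
    }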

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-457000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-457000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-457000 describe deploy/metrics-server -n kube-system: exit status 1 (25.501084ms)
** stderr ** 
	error: context "default-k8s-diff-port-457000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-457000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000: exit status 7 (27.759333ms)
-- stdout --
	Stopped

helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
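The assertion expects the deployment image to be the --registries override joined onto the --images override, i.e. "fake.domain" prefixed to "registry.k8s.io/echoserver:1.4". A one-line sketch of that composition (illustrative only; the test builds this string elsewhere):

    package main

    import "fmt"

    func main() {
        image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=...
        registry := "fake.domain"                 // from --registries=MetricsServer=...
        fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
    }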

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-457000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-457000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (5.174002834s)
-- stdout --
	* [default-k8s-diff-port-457000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-457000 in cluster default-k8s-diff-port-457000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-457000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-457000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0701 12:56:11.343052    5136 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:56:11.343155    5136 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:11.343158    5136 out.go:309] Setting ErrFile to fd 2...
	I0701 12:56:11.343161    5136 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:11.343237    5136 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:56:11.344204    5136 out.go:303] Setting JSON to false
	I0701 12:56:11.359263    5136 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1541,"bootTime":1688239830,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:56:11.359332    5136 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:56:11.364185    5136 out.go:177] * [default-k8s-diff-port-457000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:56:11.371235    5136 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:56:11.375110    5136 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:56:11.371324    5136 notify.go:220] Checking for updates...
	I0701 12:56:11.382169    5136 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:56:11.385154    5136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:56:11.388184    5136 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:56:11.391223    5136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:56:11.394506    5136 config.go:182] Loaded profile config "default-k8s-diff-port-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:56:11.394749    5136 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:56:11.399156    5136 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 12:56:11.406161    5136 start.go:297] selected driver: qemu2
	I0701 12:56:11.406166    5136 start.go:944] validating driver "qemu2" against &{Name:default-k8s-diff-port-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-457000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:56:11.406222    5136 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:56:11.408290    5136 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 12:56:11.408313    5136 cni.go:84] Creating CNI manager for ""
	I0701 12:56:11.408319    5136 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:56:11.408326    5136 start_flags.go:319] config:
	{Name:default-k8s-diff-port-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-457000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:56:11.412173    5136 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:56:11.419004    5136 out.go:177] * Starting control plane node default-k8s-diff-port-457000 in cluster default-k8s-diff-port-457000
	I0701 12:56:11.423157    5136 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:56:11.423196    5136 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:56:11.423212    5136 cache.go:57] Caching tarball of preloaded images
	I0701 12:56:11.423287    5136 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:56:11.423292    5136 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:56:11.423346    5136 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/default-k8s-diff-port-457000/config.json ...
	I0701 12:56:11.423629    5136 start.go:365] acquiring machines lock for default-k8s-diff-port-457000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:56:11.423653    5136 start.go:369] acquired machines lock for "default-k8s-diff-port-457000" in 18.333µs
	I0701 12:56:11.423663    5136 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:56:11.423668    5136 fix.go:54] fixHost starting: 
	I0701 12:56:11.423781    5136 fix.go:102] recreateIfNeeded on default-k8s-diff-port-457000: state=Stopped err=<nil>
	W0701 12:56:11.423789    5136 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:56:11.431119    5136 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-457000" ...
	I0701 12:56:11.435254    5136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:4c:f2:bc:58:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/disk.qcow2
	I0701 12:56:11.437173    5136 main.go:141] libmachine: STDOUT: 
	I0701 12:56:11.437192    5136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:56:11.437225    5136 fix.go:56] fixHost completed within 13.557334ms
	I0701 12:56:11.437229    5136 start.go:83] releasing machines lock for "default-k8s-diff-port-457000", held for 13.572417ms
	W0701 12:56:11.437237    5136 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:56:11.437268    5136 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:56:11.437272    5136 start.go:687] Will try again in 5 seconds ...
	I0701 12:56:16.439287    5136 start.go:365] acquiring machines lock for default-k8s-diff-port-457000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:56:16.439635    5136 start.go:369] acquired machines lock for "default-k8s-diff-port-457000" in 263µs
	I0701 12:56:16.439776    5136 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:56:16.439796    5136 fix.go:54] fixHost starting: 
	I0701 12:56:16.440540    5136 fix.go:102] recreateIfNeeded on default-k8s-diff-port-457000: state=Stopped err=<nil>
	W0701 12:56:16.440565    5136 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:56:16.444934    5136 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-457000" ...
	I0701 12:56:16.449129    5136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:4c:f2:bc:58:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/default-k8s-diff-port-457000/disk.qcow2
	I0701 12:56:16.457530    5136 main.go:141] libmachine: STDOUT: 
	I0701 12:56:16.457586    5136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:56:16.457667    5136 fix.go:56] fixHost completed within 17.873875ms
	I0701 12:56:16.457685    5136 start.go:83] releasing machines lock for "default-k8s-diff-port-457000", held for 18.027541ms
	W0701 12:56:16.457905    5136 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-457000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-457000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:56:16.464902    5136 out.go:177] 
	W0701 12:56:16.468998    5136 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:56:16.469042    5136 out.go:239] * 
	* 
	W0701 12:56:16.471595    5136 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:56:16.477884    5136 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-457000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000: exit status 7 (67.191208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.24s)
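Every first start and restart in this group dies at the same point: the qemu2 driver launches the VM through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A standard-library probe that reproduces the refusal (a diagnostic sketch, not minikube code):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Dial the unix socket the driver hands to socket_vmnet_client.
        // With the daemon down this fails with the same "Connection refused"
        // that aborts every VM start in this report.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }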

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-581000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-581000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (5.183156s)
-- stdout --
	* [newest-cni-581000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-581000 in cluster newest-cni-581000
	* Restarting existing qemu2 VM for "newest-cni-581000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-581000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0701 12:56:14.878460    5157 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:56:14.878568    5157 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:14.878571    5157 out.go:309] Setting ErrFile to fd 2...
	I0701 12:56:14.878573    5157 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:14.878640    5157 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:56:14.879556    5157 out.go:303] Setting JSON to false
	I0701 12:56:14.894362    5157 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1544,"bootTime":1688239830,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:56:14.894437    5157 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:56:14.899114    5157 out.go:177] * [newest-cni-581000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:56:14.902011    5157 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:56:14.906075    5157 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:56:14.902066    5157 notify.go:220] Checking for updates...
	I0701 12:56:14.912973    5157 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:56:14.916068    5157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:56:14.919086    5157 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:56:14.922065    5157 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:56:14.925367    5157 config.go:182] Loaded profile config "newest-cni-581000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:56:14.925640    5157 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:56:14.930118    5157 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 12:56:14.936984    5157 start.go:297] selected driver: qemu2
	I0701 12:56:14.936989    5157 start.go:944] validating driver "qemu2" against &{Name:newest-cni-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-581000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:56:14.937040    5157 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:56:14.938946    5157 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0701 12:56:14.938970    5157 cni.go:84] Creating CNI manager for ""
	I0701 12:56:14.938975    5157 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:56:14.938979    5157 start_flags.go:319] config:
	{Name:newest-cni-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-581000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:56:14.942787    5157 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:56:14.951139    5157 out.go:177] * Starting control plane node newest-cni-581000 in cluster newest-cni-581000
	I0701 12:56:14.954987    5157 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:56:14.955005    5157 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:56:14.955017    5157 cache.go:57] Caching tarball of preloaded images
	I0701 12:56:14.955070    5157 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0701 12:56:14.955075    5157 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:56:14.955130    5157 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/newest-cni-581000/config.json ...
	I0701 12:56:14.955423    5157 start.go:365] acquiring machines lock for newest-cni-581000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:56:14.955447    5157 start.go:369] acquired machines lock for "newest-cni-581000" in 18.791µs
	I0701 12:56:14.955457    5157 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:56:14.955462    5157 fix.go:54] fixHost starting: 
	I0701 12:56:14.955575    5157 fix.go:102] recreateIfNeeded on newest-cni-581000: state=Stopped err=<nil>
	W0701 12:56:14.955584    5157 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:56:14.960020    5157 out.go:177] * Restarting existing qemu2 VM for "newest-cni-581000" ...
	I0701 12:56:14.967970    5157 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:97:3a:6d:9f:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/disk.qcow2
	I0701 12:56:14.969761    5157 main.go:141] libmachine: STDOUT: 
	I0701 12:56:14.969773    5157 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:56:14.969800    5157 fix.go:56] fixHost completed within 14.338459ms
	I0701 12:56:14.969804    5157 start.go:83] releasing machines lock for "newest-cni-581000", held for 14.354125ms
	W0701 12:56:14.969812    5157 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:56:14.969842    5157 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:56:14.969846    5157 start.go:687] Will try again in 5 seconds ...
	I0701 12:56:19.971906    5157 start.go:365] acquiring machines lock for newest-cni-581000: {Name:mk42d9b441ee19b6b04eb25c7fe69e99dac985a8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0701 12:56:19.972331    5157 start.go:369] acquired machines lock for "newest-cni-581000" in 345.583µs
	I0701 12:56:19.972508    5157 start.go:96] Skipping create...Using existing machine configuration
	I0701 12:56:19.972532    5157 fix.go:54] fixHost starting: 
	I0701 12:56:19.973333    5157 fix.go:102] recreateIfNeeded on newest-cni-581000: state=Stopped err=<nil>
	W0701 12:56:19.973361    5157 fix.go:128] unexpected machine state, will restart: <nil>
	I0701 12:56:19.982712    5157 out.go:177] * Restarting existing qemu2 VM for "newest-cni-581000" ...
	I0701 12:56:19.986931    5157 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:97:3a:6d:9f:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/newest-cni-581000/disk.qcow2
	I0701 12:56:19.995914    5157 main.go:141] libmachine: STDOUT: 
	I0701 12:56:19.995959    5157 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0701 12:56:19.996037    5157 fix.go:56] fixHost completed within 23.50675ms
	I0701 12:56:19.996054    5157 start.go:83] releasing machines lock for "newest-cni-581000", held for 23.694667ms
	W0701 12:56:19.996239    5157 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-581000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0701 12:56:20.003743    5157 out.go:177] 
	W0701 12:56:20.007794    5157 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0701 12:56:20.007813    5157 out.go:239] * 
	W0701 12:56:20.009702    5157 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:56:20.022583    5157 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-581000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-581000 -n newest-cni-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-581000 -n newest-cni-581000: exit status 7 (67.683ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-581000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
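Both restart attempts above die on the same error: the qemu2 driver shells out to socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet. A minimal triage sketch, assuming socket_vmnet was installed through Homebrew as the minikube qemu2 driver docs describe (the service-management lines are an assumption, not taken from this log):

	# Does the socket the driver dials exist, and is a daemon holding it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Assumed Homebrew-managed install: restart the daemon, then retry the profile.
	sudo brew services restart socket_vmnet
	out/minikube-darwin-arm64 start -p newest-cni-581000 --driver=qemu2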

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-457000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000: exit status 7 (30.1855ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
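This sub-test fails before it ever reaches a cluster: the earlier SecondStart never completed, so no kubeconfig context was written for the profile. That can be confirmed directly with standard kubectl (a sketch; nothing here is profile-specific):

	# default-k8s-diff-port-457000 is expected to be missing from this list.
	kubectl config get-contexts -o name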

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-457000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-457000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-457000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (24.953625ms)
** stderr ** 
	error: context "default-k8s-diff-port-457000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-457000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000: exit status 7 (28.121042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-457000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-457000 "sudo crictl images -o json": exit status 89 (39.290375ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-457000"
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-457000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-457000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000: exit status 7 (27.147208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
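The image check pipes `sudo crictl images -o json` into a JSON decoder, so minikube's plain-text "control plane node must be running" notice surfaces as the `invalid character '*'` decode error rather than as a distinct failure. Once the node is actually running, the same verification can be reproduced by hand; a sketch assuming jq is installed (any JSON reader would do):

	# List the image tags the runtime reports and compare against the expected v1.27.3 set.
	out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-457000 "sudo crictl images -o json" \
	  | jq -r '.images[].repoTags[]' | sort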

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-457000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-457000 --alsologtostderr -v=1: exit status 89 (38.433375ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-457000"
-- /stdout --
** stderr ** 
	I0701 12:56:16.737170    5176 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:56:16.737303    5176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:16.737306    5176 out.go:309] Setting ErrFile to fd 2...
	I0701 12:56:16.737309    5176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:16.737381    5176 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:56:16.737586    5176 out.go:303] Setting JSON to false
	I0701 12:56:16.737595    5176 mustload.go:65] Loading cluster: default-k8s-diff-port-457000
	I0701 12:56:16.737784    5176 config.go:182] Loaded profile config "default-k8s-diff-port-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:56:16.741166    5176 out.go:177] * The control plane node must be running for this command
	I0701 12:56:16.745011    5176 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-457000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-457000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000: exit status 7 (27.435583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-457000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000: exit status 7 (27.328083ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.09s)
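`pause` exits 89 here for the same underlying reason the post-mortem `status` calls exit 7: the profile's host is Stopped. A caller scripting against these binaries can guard on the host state first instead of decoding exit codes afterwards; a sketch using only commands already shown in this log:

	# Only attempt the pause when the host reports Running.
	host=$(out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000)
	if [ "$host" = "Running" ]; then
	  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-457000
	fi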

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-581000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-581000 "sudo crictl images -o json": exit status 89 (43.099333ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-581000"
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-581000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-581000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-581000 -n newest-cni-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-581000 -n newest-cni-581000: exit status 7 (28.045333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-581000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-581000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-581000 --alsologtostderr -v=1: exit status 89 (39.557417ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-581000"
-- /stdout --
** stderr ** 
	I0701 12:56:20.202622    5206 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:56:20.202740    5206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:20.202743    5206 out.go:309] Setting ErrFile to fd 2...
	I0701 12:56:20.202746    5206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:56:20.202813    5206 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:56:20.203015    5206 out.go:303] Setting JSON to false
	I0701 12:56:20.203027    5206 mustload.go:65] Loading cluster: newest-cni-581000
	I0701 12:56:20.203221    5206 config.go:182] Loaded profile config "newest-cni-581000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:56:20.207273    5206 out.go:177] * The control plane node must be running for this command
	I0701 12:56:20.211382    5206 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-581000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-581000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-581000 -n newest-cni-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-581000 -n newest-cni-581000: exit status 7 (27.696166ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-581000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-581000 -n newest-cni-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-581000 -n newest-cni-581000: exit status 7 (27.873916ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-581000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
Test pass (136/244)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.27.3/json-events 6.11
11 TestDownloadOnly/v1.27.3/preload-exists 0
14 TestDownloadOnly/v1.27.3/kubectl 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
19 TestBinaryMirror 0.33
30 TestHyperKitDriverInstallOrUpdate 8.17
33 TestErrorSpam/setup 28.39
34 TestErrorSpam/start 0.33
35 TestErrorSpam/status 0.26
36 TestErrorSpam/pause 0.63
37 TestErrorSpam/unpause 0.58
38 TestErrorSpam/stop 12.25
41 TestFunctional/serial/CopySyncFile 0
42 TestFunctional/serial/StartWithProxy 45.66
43 TestFunctional/serial/AuditLog 0
44 TestFunctional/serial/SoftStart 35.57
45 TestFunctional/serial/KubeContext 0.03
46 TestFunctional/serial/KubectlGetPods 0.06
49 TestFunctional/serial/CacheCmd/cache/add_remote 5.84
50 TestFunctional/serial/CacheCmd/cache/add_local 1.16
51 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
52 TestFunctional/serial/CacheCmd/cache/list 0.03
53 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
54 TestFunctional/serial/CacheCmd/cache/cache_reload 1.29
55 TestFunctional/serial/CacheCmd/cache/delete 0.07
56 TestFunctional/serial/MinikubeKubectlCmd 0.45
57 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.56
58 TestFunctional/serial/ExtraConfig 34.36
59 TestFunctional/serial/ComponentHealth 0.04
60 TestFunctional/serial/LogsCmd 0.65
61 TestFunctional/serial/LogsFileCmd 0.67
62 TestFunctional/serial/InvalidService 4.03
64 TestFunctional/parallel/ConfigCmd 0.21
65 TestFunctional/parallel/DashboardCmd 14.5
66 TestFunctional/parallel/DryRun 0.21
67 TestFunctional/parallel/InternationalLanguage 0.11
68 TestFunctional/parallel/StatusCmd 0.27
73 TestFunctional/parallel/AddonsCmd 0.12
74 TestFunctional/parallel/PersistentVolumeClaim 25.12
76 TestFunctional/parallel/SSHCmd 0.15
77 TestFunctional/parallel/CpCmd 0.32
79 TestFunctional/parallel/FileSync 0.07
80 TestFunctional/parallel/CertSync 0.46
84 TestFunctional/parallel/NodeLabels 0.04
86 TestFunctional/parallel/NonActiveRuntimeDisabled 0.09
88 TestFunctional/parallel/License 0.23
90 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
91 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
93 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.12
94 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
95 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
96 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
97 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
98 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
100 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
101 TestFunctional/parallel/ServiceCmd/List 0.34
102 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
103 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
104 TestFunctional/parallel/ServiceCmd/Format 0.11
105 TestFunctional/parallel/ServiceCmd/URL 0.11
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
107 TestFunctional/parallel/ProfileCmd/profile_list 0.15
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
109 TestFunctional/parallel/MountCmd/any-port 5.29
111 TestFunctional/parallel/MountCmd/VerifyCleanup 0.85
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.18
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.09
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.09
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.24
119 TestFunctional/parallel/ImageCommands/Setup 2.03
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.24
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.56
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.84
123 TestFunctional/parallel/DockerEnv/bash 0.42
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.17
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.6
131 TestFunctional/delete_addon-resizer_images 0.12
132 TestFunctional/delete_my-image_image 0.04
133 TestFunctional/delete_minikube_cached_images 0.04
137 TestImageBuild/serial/Setup 28.59
138 TestImageBuild/serial/NormalBuild 1.52
140 TestImageBuild/serial/BuildWithDockerIgnore 0.11
141 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
144 TestIngressAddonLegacy/StartLegacyK8sCluster 97.43
146 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.32
147 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.25
151 TestJSONOutput/start/Command 82.68
152 TestJSONOutput/start/Audit 0
154 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/pause/Command 0.27
158 TestJSONOutput/pause/Audit 0
160 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/unpause/Command 0.22
164 TestJSONOutput/unpause/Audit 0
166 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/stop/Command 12.08
170 TestJSONOutput/stop/Audit 0
172 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
174 TestErrorJSONOutput 0.32
179 TestMainNoArgs 0.03
180 TestMinikubeProfile 62.65
236 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
240 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
241 TestNoKubernetes/serial/ProfileList 0.14
242 TestNoKubernetes/serial/Stop 0.06
244 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
258 TestStartStop/group/old-k8s-version/serial/Stop 0.06
259 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
273 TestStartStop/group/no-preload/serial/Stop 0.06
274 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
278 TestStartStop/group/embed-certs/serial/Stop 0.06
279 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.08
295 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
296 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
298 TestStartStop/group/newest-cni/serial/DeployApp 0
299 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
300 TestStartStop/group/newest-cni/serial/Stop 0.06
301 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
307 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.1s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-035000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-035000: exit status 85 (99.339833ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-035000 | jenkins | v1.30.1 | 01 Jul 23 12:34 PDT |          |
	|         | -p download-only-035000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/01 12:34:45
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:34:45.027873    1463 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:34:45.027998    1463 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:34:45.028002    1463 out.go:309] Setting ErrFile to fd 2...
	I0701 12:34:45.028004    1463 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:34:45.028067    1463 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	W0701 12:34:45.028128    1463 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15452-1041/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15452-1041/.minikube/config/config.json: no such file or directory
	I0701 12:34:45.029199    1463 out.go:303] Setting JSON to true
	I0701 12:34:45.046110    1463 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":255,"bootTime":1688239830,"procs":387,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:34:45.046181    1463 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:34:45.054197    1463 out.go:97] [download-only-035000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:34:45.058155    1463 out.go:169] MINIKUBE_LOCATION=15452
	I0701 12:34:45.054323    1463 notify.go:220] Checking for updates...
	W0701 12:34:45.054353    1463 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball: no such file or directory
	I0701 12:34:45.067108    1463 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:34:45.070192    1463 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:34:45.073147    1463 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:34:45.076150    1463 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	W0701 12:34:45.082211    1463 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0701 12:34:45.082483    1463 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:34:45.087124    1463 out.go:97] Using the qemu2 driver based on user configuration
	I0701 12:34:45.087132    1463 start.go:297] selected driver: qemu2
	I0701 12:34:45.087134    1463 start.go:944] validating driver "qemu2" against <nil>
	I0701 12:34:45.087209    1463 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0701 12:34:45.090122    1463 out.go:169] Automatically selected the socket_vmnet network
	I0701 12:34:45.096604    1463 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0701 12:34:45.096702    1463 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 12:34:45.096751    1463 cni.go:84] Creating CNI manager for ""
	I0701 12:34:45.096771    1463 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0701 12:34:45.096779    1463 start_flags.go:319] config:
	{Name:download-only-035000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-035000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:34:45.102651    1463 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:34:45.107165    1463 out.go:97] Downloading VM boot image ...
	I0701 12:34:45.107195    1463 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/iso/arm64/minikube-v1.30.1-1687455737-16703-arm64.iso
	I0701 12:34:50.963219    1463 out.go:97] Starting control plane node download-only-035000 in cluster download-only-035000
	I0701 12:34:50.963239    1463 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0701 12:34:51.017855    1463 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0701 12:34:51.017905    1463 cache.go:57] Caching tarball of preloaded images
	I0701 12:34:51.018079    1463 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0701 12:34:51.022153    1463 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0701 12:34:51.022160    1463 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0701 12:34:51.094968    1463 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0701 12:34:57.097110    1463 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0701 12:34:57.097243    1463 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0701 12:34:57.738151    1463 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0701 12:34:57.738320    1463 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/download-only-035000/config.json ...
	I0701 12:34:57.738341    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/download-only-035000/config.json: {Name:mk9f4d28b217eec16b8700d8bd45a47dc566dc7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 12:34:57.738570    1463 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0701 12:34:57.738742    1463 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0701 12:34:58.074786    1463 out.go:169] 
	W0701 12:34:58.079787    1463 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/15452-1041/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105b18430 0x105b18430 0x105b18430 0x105b18430 0x105b18430 0x105b18430 0x105b18430] Decompressors:map[bz2:0x14000057df8 gz:0x14000057ee0 tar:0x14000057e00 tar.bz2:0x14000057e10 tar.gz:0x14000057e20 tar.xz:0x14000057ea0 tar.zst:0x14000057ed0 tbz2:0x14000057e10 tgz:0x14000057e20 txz:0x14000057ea0 tzst:0x14000057ed0 xz:0x14000057ee8 zip:0x14000057ef0 zst:0x14000057f00] Getters:map[file:0x14000ad8bf0 http:0x14000a94140 https:0x14000a941e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0701 12:34:58.079812    1463 out_reason.go:110] 
	W0701 12:34:58.085888    1463 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 12:34:58.089705    1463 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-035000"
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
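The `Failed to cache kubectl` error buried in the logs above is concrete: the .sha1 checksum fetch for the v1.16.0 darwin/arm64 kubectl returned 404, so caching aborted. This is checkable without minikube; a sketch (the v1.27.3 contrast line assumes that binary is still published):

	# Expected 404: no darwin/arm64 kubectl appears to be published for v1.16.0.
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl

	# Contrast: the v1.27.3 binary used later in this run should return 200.
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.27.3/bin/darwin/arm64/kubectl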

TestDownloadOnly/v1.27.3/json-events (6.11s)
=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-035000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-035000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=qemu2 : (6.114285542s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (6.11s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
--- PASS: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-035000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-035000: exit status 85 (74.828667ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-035000 | jenkins | v1.30.1 | 01 Jul 23 12:34 PDT |          |
	|         | -p download-only-035000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-035000 | jenkins | v1.30.1 | 01 Jul 23 12:34 PDT |          |
	|         | -p download-only-035000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/01 12:34:58
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 12:34:58.286563    1487 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:34:58.286664    1487 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:34:58.286668    1487 out.go:309] Setting ErrFile to fd 2...
	I0701 12:34:58.286670    1487 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:34:58.286737    1487 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	W0701 12:34:58.286795    1487 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15452-1041/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15452-1041/.minikube/config/config.json: no such file or directory
	I0701 12:34:58.287693    1487 out.go:303] Setting JSON to true
	I0701 12:34:58.302720    1487 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":268,"bootTime":1688239830,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:34:58.302792    1487 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:34:58.308241    1487 out.go:97] [download-only-035000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:34:58.311969    1487 out.go:169] MINIKUBE_LOCATION=15452
	I0701 12:34:58.308337    1487 notify.go:220] Checking for updates...
	I0701 12:34:58.318154    1487 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:34:58.319627    1487 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:34:58.323133    1487 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:34:58.326158    1487 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	W0701 12:34:58.332201    1487 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0701 12:34:58.332456    1487 config.go:182] Loaded profile config "download-only-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0701 12:34:58.332479    1487 start.go:852] api.Load failed for download-only-035000: filestore "download-only-035000": Docker machine "download-only-035000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0701 12:34:58.332525    1487 driver.go:373] Setting default libvirt URI to qemu:///system
	W0701 12:34:58.332539    1487 start.go:852] api.Load failed for download-only-035000: filestore "download-only-035000": Docker machine "download-only-035000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0701 12:34:58.336211    1487 out.go:97] Using the qemu2 driver based on existing profile
	I0701 12:34:58.336219    1487 start.go:297] selected driver: qemu2
	I0701 12:34:58.336221    1487 start.go:944] validating driver "qemu2" against &{Name:download-only-035000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-035000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:34:58.338010    1487 cni.go:84] Creating CNI manager for ""
	I0701 12:34:58.338024    1487 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0701 12:34:58.338029    1487 start_flags.go:319] config:
	{Name:download-only-035000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-035000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:34:58.341776    1487 iso.go:125] acquiring lock: {Name:mkdb0bd8f1995109032ab9cd150d0fc0a2c514a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 12:34:58.345198    1487 out.go:97] Starting control plane node download-only-035000 in cluster download-only-035000
	I0701 12:34:58.345212    1487 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:34:58.397348    1487 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:34:58.397363    1487 cache.go:57] Caching tarball of preloaded images
	I0701 12:34:58.397517    1487 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:34:58.402714    1487 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0701 12:34:58.402723    1487 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0701 12:34:58.476525    1487 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4?checksum=md5:e061b1178966dc348ac19219444153f4 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0701 12:35:02.624064    1487 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0701 12:35:02.624207    1487 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0701 12:35:03.183008    1487 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0701 12:35:03.183071    1487 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/download-only-035000/config.json ...
	I0701 12:35:03.183326    1487 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0701 12:35:03.183530    1487 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/15452-1041/.minikube/cache/darwin/arm64/v1.27.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-035000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-035000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.33s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-039000 --alsologtostderr --binary-mirror http://127.0.0.1:49310 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-039000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-039000
--- PASS: TestBinaryMirror (0.33s)
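
TestBinaryMirror passes --binary-mirror http://127.0.0.1:49310, a local endpoint the test itself serves. Any static file server laid out like dl.k8s.io can act as such a mirror; a rough Go illustration (the directory here is hypothetical, not what the test uses internally):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve pre-downloaded release binaries, e.g.
	// /tmp/k8s-mirror/release/v1.27.3/bin/darwin/arm64/kubectl
	fs := http.FileServer(http.Dir("/tmp/k8s-mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:49310", fs))
}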

TestHyperKitDriverInstallOrUpdate (8.17s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.17s)

TestErrorSpam/setup (28.39s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-694000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-694000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 --driver=qemu2 : (28.390834667s)
--- PASS: TestErrorSpam/setup (28.39s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.26s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 status
--- PASS: TestErrorSpam/status (0.26s)

TestErrorSpam/pause (0.63s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 pause
--- PASS: TestErrorSpam/pause (0.63s)

TestErrorSpam/unpause (0.58s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 unpause
--- PASS: TestErrorSpam/unpause (0.58s)

TestErrorSpam/stop (12.25s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 stop: (12.08427075s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-694000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-694000 stop
--- PASS: TestErrorSpam/stop (12.25s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/15452-1041/.minikube/files/etc/test/nested/copy/1461/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.66s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-011000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-011000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (45.656153875s)
--- PASS: TestFunctional/serial/StartWithProxy (45.66s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.57s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-011000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-011000 --alsologtostderr -v=8: (35.569691375s)
functional_test.go:659: soft start took 35.570173875s for "functional-011000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.57s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-011000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-011000 cache add registry.k8s.io/pause:3.1: (2.179433917s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-011000 cache add registry.k8s.io/pause:3.3: (2.0559225s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-011000 cache add registry.k8s.io/pause:latest: (1.6092955s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.84s)

TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-011000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local3902513621/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 cache add minikube-local-cache-test:functional-011000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 cache delete minikube-local-cache-test:functional-011000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-011000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (74.362125ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-arm64 -p functional-011000 cache reload: (1.055368958s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.29s)
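
The cache_reload sequence above is: remove the image inside the node, confirm crictl no longer finds it (the expected exit status 1), run cache reload, then confirm the image is back. A small Go driver reproducing the same four commands against this profile (a sketch; the test's own helpers do the equivalent):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one command and echoes its combined output, mirroring
// the (dbg) Run lines in the log above.
func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("$ %v\n%s", args, out)
	return err
}

func main() {
	bin, img := "out/minikube-darwin-arm64", "registry.k8s.io/pause:latest"
	_ = run(bin, "-p", "functional-011000", "ssh", "sudo", "docker", "rmi", img)
	// Expected to fail: the image was just removed from the node.
	_ = run(bin, "-p", "functional-011000", "ssh", "sudo", "crictl", "inspecti", img)
	_ = run(bin, "-p", "functional-011000", "cache", "reload")
	// Expected to succeed once the reload has repopulated the node.
	_ = run(bin, "-p", "functional-011000", "ssh", "sudo", "crictl", "inspecti", img)
}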

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.45s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 kubectl -- --context functional-011000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.45s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-011000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.56s)

TestFunctional/serial/ExtraConfig (34.36s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-011000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-011000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.3566895s)
functional_test.go:757: restart took 34.356799667s for "functional-011000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.36s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-011000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
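
ComponentHealth asserts that each control-plane pod is in phase Running with a Ready condition, which is what the phase/status lines above report. A minimal Go sketch of that check (assuming the JSON from the kubectl command above is piped to stdin; the real test uses its own helpers):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// podList models just the fields the check needs from `kubectl get po -o=json`.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type, Status string // matched case-insensitively to "type"/"status"
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	var pl podList
	if err := json.NewDecoder(os.Stdin).Decode(&pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Items {
		status := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				status = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, status)
	}
}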

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.67s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3733708181/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.67s)

TestFunctional/serial/InvalidService (4.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-011000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-011000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-011000: exit status 115 (158.705084ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32737 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-011000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.03s)

TestFunctional/parallel/ConfigCmd (0.21s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 config get cpus: exit status 14 (28.161333ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 config get cpus: exit status 14 (29.577625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)

TestFunctional/parallel/DashboardCmd (14.5s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-011000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-011000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2124: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.50s)

TestFunctional/parallel/DryRun (0.21s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-011000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-011000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (106.551458ms)

-- stdout --
	* [functional-011000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0701 12:39:30.182508    2109 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:39:30.182624    2109 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:39:30.182627    2109 out.go:309] Setting ErrFile to fd 2...
	I0701 12:39:30.182629    2109 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:39:30.182690    2109 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:39:30.183701    2109 out.go:303] Setting JSON to false
	I0701 12:39:30.198985    2109 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":540,"bootTime":1688239830,"procs":394,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:39:30.199057    2109 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:39:30.202905    2109 out.go:177] * [functional-011000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0701 12:39:30.209895    2109 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:39:30.213886    2109 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:39:30.209953    2109 notify.go:220] Checking for updates...
	I0701 12:39:30.216889    2109 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:39:30.219837    2109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:39:30.222859    2109 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:39:30.225923    2109 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:39:30.229117    2109 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:39:30.229343    2109 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:39:30.233880    2109 out.go:177] * Using the qemu2 driver based on existing profile
	I0701 12:39:30.239861    2109 start.go:297] selected driver: qemu2
	I0701 12:39:30.239865    2109 start.go:944] validating driver "qemu2" against &{Name:functional-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-011000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:39:30.239909    2109 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:39:30.245899    2109 out.go:177] 
	W0701 12:39:30.249828    2109 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0701 12:39:30.252854    2109 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-011000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.21s)
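
The RSRC_INSUFFICIENT_REQ_MEMORY exit above comes from driver validation rejecting --memory 250MB before any VM work happens; the log reports a usable floor of 1800MB. A minimal Go sketch of such a floor check (constant and wording taken from the log; not minikube's actual code):

package main

import "fmt"

const minUsableMemoryMB = 1800 // floor reported in the log above

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250)) // mirrors the --memory 250MB dry run
}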

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-011000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-011000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (105.5485ms)

-- stdout --
	* [functional-011000] minikube v1.30.1 sur Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0701 12:39:30.071524    2105 out.go:296] Setting OutFile to fd 1 ...
	I0701 12:39:30.071636    2105 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:39:30.071638    2105 out.go:309] Setting ErrFile to fd 2...
	I0701 12:39:30.071641    2105 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0701 12:39:30.071849    2105 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
	I0701 12:39:30.073755    2105 out.go:303] Setting JSON to false
	I0701 12:39:30.089942    2105 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":540,"bootTime":1688239830,"procs":394,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0701 12:39:30.090023    2105 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0701 12:39:30.094962    2105 out.go:177] * [functional-011000] minikube v1.30.1 sur Darwin 13.4.1 (arm64)
	I0701 12:39:30.101915    2105 out.go:177]   - MINIKUBE_LOCATION=15452
	I0701 12:39:30.101928    2105 notify.go:220] Checking for updates...
	I0701 12:39:30.106890    2105 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	I0701 12:39:30.109926    2105 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0701 12:39:30.112846    2105 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 12:39:30.115874    2105 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	I0701 12:39:30.118915    2105 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 12:39:30.120436    2105 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0701 12:39:30.120731    2105 driver.go:373] Setting default libvirt URI to qemu:///system
	I0701 12:39:30.124861    2105 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0701 12:39:30.131743    2105 start.go:297] selected driver: qemu2
	I0701 12:39:30.131749    2105 start.go:944] validating driver "qemu2" against &{Name:functional-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-011000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0701 12:39:30.131792    2105 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 12:39:30.137873    2105 out.go:177] 
	W0701 12:39:30.141925    2105 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0701 12:39:30.145831    2105 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.27s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.27s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (25.12s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c6de41e7-048e-46f4-99d8-61d3c3eeaee5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.016720541s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-011000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-011000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-011000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-011000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f5faa226-d560-49ce-85df-518fa1c70fe4] Pending
helpers_test.go:344: "sp-pod" [f5faa226-d560-49ce-85df-518fa1c70fe4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f5faa226-d560-49ce-85df-518fa1c70fe4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.010162s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-011000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-011000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-011000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a062df4a-4dea-4b1c-a1ff-03988d298c63] Pending
helpers_test.go:344: "sp-pod" [a062df4a-4dea-4b1c-a1ff-03988d298c63] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a062df4a-4dea-4b1c-a1ff-03988d298c63] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.014517417s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-011000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.12s)
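
The claim file applied above (testdata/storage-provisioner/pvc.yaml) is not echoed into this log; a minimal PVC of the same shape, with an illustrative size, would look like:

    # illustrative claim; only the name "myclaim" is taken from the log
    kubectl --context functional-011000 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
    EOF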

TestFunctional/parallel/SSHCmd (0.15s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.15s)

TestFunctional/parallel/CpCmd (0.32s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh -n functional-011000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 cp functional-011000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1909065692/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh -n functional-011000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.32s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1461/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "sudo cat /etc/test/nested/copy/1461/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.46s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1461.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "sudo cat /etc/ssl/certs/1461.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1461.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "sudo cat /usr/share/ca-certificates/1461.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14612.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "sudo cat /etc/ssl/certs/14612.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14612.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "sudo cat /usr/share/ca-certificates/14612.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.46s)
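
The 51391683.0 and 3ec20f2e.0 names checked above are OpenSSL subject-hash links into /etc/ssl/certs; the hash that names such a link can be recomputed from the certificate itself:

    # prints the subject hash that names the <hash>.0 symlink for this cert
    openssl x509 -noout -hash -in /etc/ssl/certs/1461.pem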

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-011000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
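
The go-template above enumerates the label keys of the first node; the same information is available without a template via kubectl's built-in flag:

    kubectl --context functional-011000 get nodes --show-labels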

TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 ssh "sudo systemctl is-active crio": exit status 1 (85.533333ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)
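
systemctl is-active exits non-zero for any state other than "active" (here status 3 with "inactive" on stdout), so the non-zero exit above is the expected pass condition: the docker runtime is active, crio is not. Reproducible by hand as:

    # exit status 3 plus "inactive" means the unit exists but is stopped
    minikube -p functional-011000 ssh "sudo systemctl is-active crio"; echo "exit=$?"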

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-011000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-011000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-011000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-011000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1952: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-011000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-011000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2f601e72-3043-4fc9-9593-266314a684d6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2f601e72-3043-4fc9-9593-266314a684d6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.006095708s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.12s)
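
testdata/testsvc.yaml itself is not shown in this log, but for minikube tunnel to assign the ingress IP read in the next step, the service must be of type LoadBalancer; roughly (name and selector inferred from the run=nginx-svc wait above, port is an assumption):

    kubectl --context functional-011000 apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
    spec:
      type: LoadBalancer
      selector:
        run: nginx-svc
      ports:
        - port: 80
    EOF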

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-011000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.137.4 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-011000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-011000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-011000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-lqbz6" [0388c9f1-c13a-4397-a37e-1e0d13744034] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-lqbz6" [0388c9f1-c13a-4397-a37e-1e0d13744034] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.013156541s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 service list -o json
functional_test.go:1493: Took "316.528666ms" to run "out/minikube-darwin-arm64 -p functional-011000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:31943
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:31943
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)
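
The printed endpoint can be consumed directly, e.g.:

    # fetch the NodePort service through the URL minikube resolves
    curl -s "$(minikube -p functional-011000 service hello-node --url)"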

TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "120.159458ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "32.770417ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "119.255208ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "32.66825ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)
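
A sketch of consuming the JSON listing; the top-level valid/invalid split is an assumption about the schema, not shown in this log:

    minikube profile list -o json | jq -r '.valid[].Name'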

TestFunctional/parallel/MountCmd/any-port (5.29s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-011000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port603677003/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1688240358196898000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port603677003/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1688240358196898000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port603677003/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1688240358196898000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port603677003/001/test-1688240358196898000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (68.960083ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  1 19:39 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  1 19:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  1 19:39 test-1688240358196898000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh cat /mount-9p/test-1688240358196898000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-011000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [09a9d85b-3d5c-41b1-ab27-70339046efaa] Pending
helpers_test.go:344: "busybox-mount" [09a9d85b-3d5c-41b1-ab27-70339046efaa] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [09a9d85b-3d5c-41b1-ab27-70339046efaa] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [09a9d85b-3d5c-41b1-ab27-70339046efaa] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.007779541s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-011000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-011000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port603677003/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.29s)
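
The 9p flow above can be reproduced by hand; the first failed findmnt simply raced the mount daemon coming up, and the retry succeeded. A condensed sketch (local path is illustrative):

    # start a 9p mount in the background, verify it, then kill all mount processes
    minikube mount -p functional-011000 /tmp/src:/mount-9p &
    minikube -p functional-011000 ssh "findmnt -T /mount-9p"
    minikube mount -p functional-011000 --kill=true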

TestFunctional/parallel/MountCmd/VerifyCleanup (0.85s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-011000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2053979480/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-011000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2053979480/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-011000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2053979480/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T" /mount1: exit status 80 (72.710166ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1041/.minikube/machines/functional-011000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_mount_a64d9ebfb47bbf858aff6c3e241ab73c486ee6a2_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-011000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-011000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2053979480/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-011000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2053979480/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-011000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2053979480/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.85s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.18s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.18s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-011000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-011000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-011000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-011000 image ls --format short --alsologtostderr:
I0701 12:39:52.053182    2306 out.go:296] Setting OutFile to fd 1 ...
I0701 12:39:52.053304    2306 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 12:39:52.053307    2306 out.go:309] Setting ErrFile to fd 2...
I0701 12:39:52.053310    2306 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 12:39:52.053384    2306 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
I0701 12:39:52.053745    2306 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0701 12:39:52.053804    2306 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0701 12:39:52.055019    2306 ssh_runner.go:195] Run: systemctl --version
I0701 12:39:52.055032    2306 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/functional-011000/id_rsa Username:docker}
I0701 12:39:52.093233    2306 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-011000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.27.3           | 39dfb036b0986 | 115MB  |
| registry.k8s.io/kube-scheduler              | v1.27.3           | bcb9e554eaab6 | 56.2MB |
| gcr.io/google-containers/addon-resizer      | functional-011000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-controller-manager     | v1.27.3           | ab3683b584ae5 | 107MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-011000 | 856f9032548b4 | 30B    |
| docker.io/library/nginx                     | alpine            | 66bf2c914bf4d | 41MB   |
| registry.k8s.io/kube-proxy                  | v1.27.3           | fb73e92641fd5 | 66.5MB |
| docker.io/library/nginx                     | latest            | 2d21d843073b4 | 192MB  |
| registry.k8s.io/etcd                        | 3.5.7-0           | 24bc64e911039 | 181MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-011000 image ls --format table --alsologtostderr:
I0701 12:39:52.229844    2316 out.go:296] Setting OutFile to fd 1 ...
I0701 12:39:52.229983    2316 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 12:39:52.229986    2316 out.go:309] Setting ErrFile to fd 2...
I0701 12:39:52.229989    2316 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 12:39:52.230061    2316 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
I0701 12:39:52.230544    2316 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0701 12:39:52.230601    2316 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0701 12:39:52.231383    2316 ssh_runner.go:195] Run: systemctl --version
I0701 12:39:52.231393    2316 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/functional-011000/id_rsa Username:docker}
I0701 12:39:52.271006    2316 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-011000 image ls --format json --alsologtostderr:
[{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41000000"},{"id":"39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"115000000"},{"id":"ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"107000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-011000"],"size":"32900000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"56200000"},{"id":"fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"66500000"},{"id":"2d21d843073b4df6a03022861da4cb59f7116c864fe90b3b5db3b90e1ce932d3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"181000000"},{"id":"856f9032548b4018f74c97234242899c727ed72adf590a5761d3bbe3f01e1cb4","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-011000"],"size":"30"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-011000 image ls --format json --alsologtostderr:
I0701 12:39:52.146449    2312 out.go:296] Setting OutFile to fd 1 ...
I0701 12:39:52.146612    2312 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 12:39:52.146617    2312 out.go:309] Setting ErrFile to fd 2...
I0701 12:39:52.146620    2312 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 12:39:52.146702    2312 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
I0701 12:39:52.147127    2312 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0701 12:39:52.147187    2312 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0701 12:39:52.148196    2312 ssh_runner.go:195] Run: systemctl --version
I0701 12:39:52.148209    2312 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/functional-011000/id_rsa Username:docker}
I0701 12:39:52.186357    2312 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.09s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 ssh pgrep buildkitd: exit status 1 (77.760709ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image build -t localhost/my-image:functional-011000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-011000 image build -t localhost/my-image:functional-011000 testdata/build --alsologtostderr: (2.083991792s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-011000 image build -t localhost/my-image:functional-011000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in cfe0c03df029
Removing intermediate container cfe0c03df029
---> 06c5db482abf
Step 3/3 : ADD content.txt /
---> 8cb802c0947b
Successfully built 8cb802c0947b
Successfully tagged localhost/my-image:functional-011000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-011000 image build -t localhost/my-image:functional-011000 testdata/build --alsologtostderr:
I0701 12:39:52.169423    2314 out.go:296] Setting OutFile to fd 1 ...
I0701 12:39:52.169640    2314 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 12:39:52.169643    2314 out.go:309] Setting ErrFile to fd 2...
I0701 12:39:52.169645    2314 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 12:39:52.169723    2314 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1041/.minikube/bin
I0701 12:39:52.170114    2314 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0701 12:39:52.170511    2314 config.go:182] Loaded profile config "functional-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0701 12:39:52.171272    2314 ssh_runner.go:195] Run: systemctl --version
I0701 12:39:52.171282    2314 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1041/.minikube/machines/functional-011000/id_rsa Username:docker}
I0701 12:39:52.206989    2314 build_images.go:151] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3059036174.tar
I0701 12:39:52.207051    2314 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0701 12:39:52.210414    2314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3059036174.tar
I0701 12:39:52.212126    2314 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3059036174.tar: stat -c "%s %y" /var/lib/minikube/build/build.3059036174.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3059036174.tar': No such file or directory
I0701 12:39:52.212148    2314 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3059036174.tar --> /var/lib/minikube/build/build.3059036174.tar (3072 bytes)
I0701 12:39:52.220521    2314 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3059036174
I0701 12:39:52.224324    2314 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3059036174 -xf /var/lib/minikube/build/build.3059036174.tar
I0701 12:39:52.227440    2314 docker.go:339] Building image: /var/lib/minikube/build/build.3059036174
I0701 12:39:52.227488    2314 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-011000 /var/lib/minikube/build/build.3059036174
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0701 12:39:54.210258    2314 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-011000 /var/lib/minikube/build/build.3059036174: (1.982792667s)
I0701 12:39:54.210322    2314 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3059036174
I0701 12:39:54.213378    2314 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3059036174.tar
I0701 12:39:54.216282    2314 build_images.go:207] Built localhost/my-image:functional-011000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3059036174.tar
I0701 12:39:54.216299    2314 build_images.go:123] succeeded building to: functional-011000
I0701 12:39:54.216301    2314 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.24s)
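
From the Step 1/3..3/3 lines echoed above, testdata/build is evidently equivalent to the following Dockerfile (with a content.txt beside it):

    # reconstructed from the build steps in this log
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    minikube -p functional-011000 image build -t localhost/my-image:functional-011000 .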

TestFunctional/parallel/ImageCommands/Setup (2.03s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.983655042s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-011000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.03s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image load --daemon gcr.io/google-containers/addon-resizer:functional-011000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-011000 image load --daemon gcr.io/google-containers/addon-resizer:functional-011000 --alsologtostderr: (2.102205291s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.24s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image load --daemon gcr.io/google-containers/addon-resizer:functional-011000 --alsologtostderr
2023/07/01 12:39:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-011000 image load --daemon gcr.io/google-containers/addon-resizer:functional-011000 --alsologtostderr: (1.483919375s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.56s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.786687333s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-011000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image load --daemon gcr.io/google-containers/addon-resizer:functional-011000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-011000 image load --daemon gcr.io/google-containers/addon-resizer:functional-011000 --alsologtostderr: (1.915156209s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.84s)

TestFunctional/parallel/DockerEnv/bash (0.42s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-011000 docker-env) && out/minikube-darwin-arm64 status -p functional-011000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-011000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.42s)
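
The eval pattern above points the host docker CLI at the cluster's daemon; it can be reverted with the matching unset form:

    eval "$(minikube -p functional-011000 docker-env)"
    docker images
    eval "$(minikube -p functional-011000 docker-env --unset)"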

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image save gcr.io/google-containers/addon-resizer:functional-011000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image rm gcr.io/google-containers/addon-resizer:functional-011000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.6s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-011000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 image save --daemon gcr.io/google-containers/addon-resizer:functional-011000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-arm64 -p functional-011000 image save --daemon gcr.io/google-containers/addon-resizer:functional-011000 --alsologtostderr: (1.523358417s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-011000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.60s)

TestFunctional/delete_addon-resizer_images (0.12s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-011000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-011000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-011000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (28.59s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-933000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-933000 --driver=qemu2 : (28.592745s)
--- PASS: TestImageBuild/serial/Setup (28.59s)

TestImageBuild/serial/NormalBuild (1.52s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-933000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-933000: (1.517948167s)
--- PASS: TestImageBuild/serial/NormalBuild (1.52s)

TestImageBuild/serial/BuildWithDockerIgnore (0.11s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-933000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.11s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.1s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-933000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)

TestIngressAddonLegacy/StartLegacyK8sCluster (97.43s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-673000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-673000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m37.426708125s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (97.43s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.32s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-673000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-673000 addons enable ingress --alsologtostderr -v=5: (17.322264916s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.32s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.25s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-673000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.25s)

TestJSONOutput/start/Command (82.68s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-139000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0701 12:43:45.508946    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
E0701 12:43:45.515745    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
E0701 12:43:45.527798    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
E0701 12:43:45.549884    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
E0701 12:43:45.591942    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
E0701 12:43:45.673981    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
E0701 12:43:45.836044    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
E0701 12:43:46.158133    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
E0701 12:43:46.800316    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
E0701 12:43:48.082460    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
E0701 12:43:50.644634    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
E0701 12:43:55.765104    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
E0701 12:44:06.007295    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
E0701 12:44:26.489359    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-139000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (1m22.681065833s)
--- PASS: TestJSONOutput/start/Command (82.68s)
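Note: the cert_rotation errors above are harmless fallout from the earlier functional-011000 run; they are stale references to a client certificate that no longer exists on disk. The gaps between their timestamps roughly double each time (about 7ms, 12ms, 22ms, ... up to ~20s), which looks like an exponential retry backoff. A minimal sketch of such a loop, under that doubling assumption (client-go's actual cert_rotation code may differ):

// backoff_sketch.go: retry a failing read with doubling delays.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	// The path from the log above; it was removed along with the profile.
	path := "/Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt"
	delay := 10 * time.Millisecond
	for attempt := 1; attempt <= 8; attempt++ {
		if _, err := os.ReadFile(path); err == nil {
			fmt.Println("key loaded")
			return
		}
		time.Sleep(delay)
		delay *= 2 // double the wait, matching the spacing seen in the log
	}
	fmt.Println("giving up: key still missing")
}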

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
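Note: the DistinctCurrentSteps and IncreasingCurrentSteps subtests assert properties of the CloudEvents stream that `--output=json` emits, one JSON object per line; the payload shape is visible under TestErrorJSONOutput below. A hedged sketch of the "increasing" property check, not the suite's actual code:

// steps_sketch.go: pipe `minikube start --output=json` output into stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	prev := -1
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // ignore non-step lines
		}
		cur, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			continue
		}
		if cur <= prev { // strictly increasing implies distinct, too
			fmt.Printf("step regressed: %d after %d\n", cur, prev)
			os.Exit(1)
		}
		prev = cur
	}
	fmt.Println("steps strictly increasing")
}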

TestJSONOutput/pause/Command (0.27s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-139000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.27s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.22s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-139000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.22s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.08s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-139000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-139000 --output=json --user=testUser: (12.079268667s)
--- PASS: TestJSONOutput/stop/Command (12.08s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-015000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-015000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.3835ms)

-- stdout --
	{"specversion":"1.0","id":"4229c46b-a9d5-401f-a377-26e82df46eb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-015000] minikube v1.30.1 on Darwin 13.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c5cfafd-e0b2-47c2-89a2-1edbad54e671","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15452"}}
	{"specversion":"1.0","id":"30884302-bded-4f16-b356-0be397fcbaf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig"}}
	{"specversion":"1.0","id":"28c71634-85bf-41c3-aeed-abd977c38b36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1ed6f05e-c6b1-4a4b-8f53-b376ad3e0950","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bd50b216-c5a8-4a4d-85f2-3658f07cee19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube"}}
	{"specversion":"1.0","id":"20464587-4c35-41cf-b91d-fd5385baf6d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f6815d72-797d-494b-9138-a4bfa0e93cc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-015000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-015000
--- PASS: TestErrorJSONOutput (0.32s)

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (62.65s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-262000 --driver=qemu2 
E0701 12:45:07.450923    1461 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1041/.minikube/profiles/functional-011000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-262000 --driver=qemu2 : (30.183264375s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-264000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-264000 --driver=qemu2 : (31.692070542s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-262000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-264000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-264000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-264000
helpers_test.go:175: Cleaning up "first-262000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-262000
--- PASS: TestMinikubeProfile (62.65s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-739000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-739000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (93.917792ms)

-- stdout --
	* [NoKubernetes-739000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1041/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1041/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
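Note: this test passes because minikube rejects the flag combination: --no-kubernetes together with an explicit --kubernetes-version is a usage error (MK_USAGE, exit status 14). A hedged sketch of that kind of mutual-exclusion check; names here are illustrative, not minikube's source:

// usage_check_sketch.go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	version := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()
	if *noK8s && *version != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the exit status observed in the log above
	}
	fmt.Println("flags accepted")
}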

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-739000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-739000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.425334ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-739000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.14s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)

TestNoKubernetes/serial/Stop (0.06s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-739000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-739000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-739000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (40.284166ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-739000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-326000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-326000 -n old-k8s-version-326000: exit status 7 (27.327584ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-326000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/no-preload/serial/Stop (0.06s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-146000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-146000 -n no-preload-146000: exit status 7 (28.036541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-146000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-808000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.08s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-808000 -n embed-certs-808000: exit status 7 (27.012625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-808000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-457000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-457000 -n default-k8s-diff-port-457000: exit status 7 (27.518042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-457000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-581000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-581000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-581000 -n newest-cni-581000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-581000 -n newest-cni-581000: exit status 7 (27.686625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-581000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/244)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
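Note: a hedged sketch of the architecture gate behind this skip; the suite's actual helper may differ:

// mysql_skip_sketch_test.go
package main

import (
	"runtime"
	"testing"
)

func TestMySQLSketch(t *testing.T) {
	if runtime.GOARCH == "arm64" {
		t.Skip("arm64 is not supported by mysql; see https://github.com/kubernetes/minikube/issues/10144")
	}
	// MySQL deployment checks would run here on supported architectures.
}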

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/specific-port (14.74s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-011000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2838750774/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (72.252458ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (74.456666ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (67.552584ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (80.113459ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (113.892791ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (73.482125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (71.607292ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (72.021917ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-011000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-011000 ssh "sudo umount -f /mount-9p": exit status 1 (69.936958ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-011000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-011000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2838750774/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (14.74s)
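Note: the repeated findmnt invocations above are a poll loop: the test keeps asking the guest whether the 9p mount has appeared, and skips once the retries are exhausted (macOS withholds the mount unless the unsigned binary is allowed to listen on a non-localhost port). A minimal sketch of that polling pattern, not the suite's implementation:

// mount_poll_sketch.go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount re-runs findmnt over `minikube ssh` until the mount shows up.
func waitForMount(profile, path string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("minikube", "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", path))
		if cmd.Run() == nil { // exit 0: the mount is visible in the guest
			return true
		}
		time.Sleep(time.Second)
	}
	return false
}

func main() {
	if !waitForMount("functional-011000", "/mount-9p", 10*time.Second) {
		fmt.Println("mount did not appear; skipping, as the test above does")
	}
}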

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.43s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-674000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-674000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-674000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-674000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-674000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-674000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-674000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-674000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-674000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-674000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-674000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-674000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-674000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-674000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-674000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-674000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-674000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-674000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-674000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-674000"

                                                
                                                
----------------------- debugLogs end: cilium-674000 [took: 2.1882935s] --------------------------------
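Every probe above failed for the same root cause: the cilium-674000 cluster was never started, so the kubeconfig is empty (clusters: null, contexts: null, as the "k8s: kubectl config" entry shows). Each kubectl call naming that context therefore reports "context was not found" or "does not exist", and each minikube-backed "host:" probe reports the missing profile. As a rough sketch of what a collector like this does (an assumption about its shape, not minikube's actual implementation), each entry shells out and prints whatever comes back, errors included, so a dead cluster still leaves a readable trail:

// debuglogs_sketch.go - illustrative only, not minikube's code.
package main

import (
	"fmt"
	"os/exec"
)

// dump runs one command and prints its combined output under a ">>>"
// header, so a failing command still leaves a visible error line.
func dump(header, name string, args ...string) {
	fmt.Printf(">>> %s:\n", header)
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil && len(out) == 0 {
		fmt.Println(err) // command produced no output; show the exec error
	}
	fmt.Println()
}

func main() {
	profile := "cilium-674000"
	// With no such context in the kubeconfig, kubectl emits the
	// "context was not found" / "does not exist" errors seen above.
	dump("k8s: kubectl config", "kubectl", "config", "view")
	dump("k8s: coredns logs", "kubectl", "--context", profile,
		"-n", "kube-system", "logs", "-l", "k8s-app=kube-dns")
}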
helpers_test.go:175: Cleaning up "cilium-674000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-674000
--- SKIP: TestNetworkPlugins/group/cilium (2.43s)

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-021000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-021000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
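For reference, the === PAUSE / === CONT pair in the last block is standard go test output for a subtest that calls t.Parallel(): the subtest is registered, paused, and only continued once the parent's serial phase has finished, after which the virtualbox-only guard skips it. A self-contained illustration (not the actual minikube test):

// pause_cont_sketch_test.go - illustrative only.
package sketch

import "testing"

func TestStartStopSketch(t *testing.T) {
	t.Run("group/disable-driver-mounts", func(t *testing.T) {
		t.Parallel() // go test -v prints "=== PAUSE", later "=== CONT"
		t.Skip("skipping - only runs on virtualbox")
	})
}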