Test Report: QEMU_macOS 17194

Commit: 03b3a1191a73942c676aa26934a5795f62561627 | Date: 2023-09-12 | Build: 30988

Failed tests (87/244)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 28.36
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.87
22 TestAddons/Setup 44.34
23 TestCertOptions 10.1
24 TestCertExpiration 195.31
25 TestDockerFlags 9.96
26 TestForceSystemdFlag 10.87
27 TestForceSystemdEnv 9.97
72 TestFunctional/parallel/ServiceCmdConnect 35.74
110 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.17
139 TestImageBuild/serial/BuildWithBuildArg 1.06
148 TestIngressAddonLegacy/serial/ValidateIngressAddons 54.07
183 TestMountStart/serial/StartWithMountFirst 10.51
186 TestMultiNode/serial/FreshStart2Nodes 10.09
187 TestMultiNode/serial/DeployApp2Nodes 101.51
188 TestMultiNode/serial/PingHostFrom2Pods 0.09
189 TestMultiNode/serial/AddNode 0.07
190 TestMultiNode/serial/ProfileList 0.1
191 TestMultiNode/serial/CopyFile 0.06
192 TestMultiNode/serial/StopNode 0.13
193 TestMultiNode/serial/StartAfterStop 0.1
194 TestMultiNode/serial/RestartKeepsNodes 5.37
195 TestMultiNode/serial/DeleteNode 0.1
196 TestMultiNode/serial/StopMultiNode 0.15
197 TestMultiNode/serial/RestartMultiNode 5.24
198 TestMultiNode/serial/ValidateNameConflict 20.37
202 TestPreload 10.24
204 TestScheduledStopUnix 10.09
205 TestSkaffold 13.45
208 TestRunningBinaryUpgrade 149.9
210 TestKubernetesUpgrade 15.38
223 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.14
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.29
225 TestStoppedBinaryUpgrade/Setup 162.59
227 TestPause/serial/Start 9.75
237 TestNoKubernetes/serial/StartWithK8s 9.9
238 TestNoKubernetes/serial/StartWithStopK8s 5.47
239 TestNoKubernetes/serial/Start 5.48
243 TestNoKubernetes/serial/StartNoArgs 5.47
245 TestNetworkPlugins/group/auto/Start 9.9
246 TestNetworkPlugins/group/flannel/Start 9.81
247 TestNetworkPlugins/group/enable-default-cni/Start 9.68
248 TestNetworkPlugins/group/kindnet/Start 9.8
249 TestNetworkPlugins/group/bridge/Start 9.76
250 TestNetworkPlugins/group/kubenet/Start 9.68
251 TestNetworkPlugins/group/custom-flannel/Start 9.66
252 TestNetworkPlugins/group/calico/Start 9.7
253 TestNetworkPlugins/group/false/Start 9.78
255 TestStartStop/group/old-k8s-version/serial/FirstStart 11.53
256 TestStoppedBinaryUpgrade/Upgrade 3.05
257 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
259 TestStartStop/group/no-preload/serial/FirstStart 9.89
260 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
261 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
264 TestStartStop/group/old-k8s-version/serial/SecondStart 5.23
265 TestStartStop/group/no-preload/serial/DeployApp 0.09
266 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
269 TestStartStop/group/no-preload/serial/SecondStart 5.27
270 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
271 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
272 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
273 TestStartStop/group/old-k8s-version/serial/Pause 0.1
275 TestStartStop/group/embed-certs/serial/FirstStart 10.14
276 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
277 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
278 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
279 TestStartStop/group/no-preload/serial/Pause 0.1
281 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.77
282 TestStartStop/group/embed-certs/serial/DeployApp 0.09
283 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/embed-certs/serial/SecondStart 5.22
287 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
288 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
291 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.29
292 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
293 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
294 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
295 TestStartStop/group/embed-certs/serial/Pause 0.1
297 TestStartStop/group/newest-cni/serial/FirstStart 10.14
298 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
306 TestStartStop/group/newest-cni/serial/SecondStart 5.25
309 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/newest-cni/serial/Pause 0.1
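Many of the sub-second entries above are likely cascades from an earlier setup failure (e.g. the `TestMultiNode/serial/*` subtests fail in well under a second once `FreshStart2Nodes` fails) rather than independent bugs. A quick way to count them, sketched here with a few sample rows inlined; in practice, feed the full table:

```shell
# Count failures whose duration (last field) is under one second.
# Sample rows copied from the table; pipe the whole table in for real use.
awk '$NF < 1 { fast++ } END { print fast " sub-second failures" }' <<'EOF'
188 TestMultiNode/serial/AddNode 0.07
189 TestMultiNode/serial/ProfileList 0.1
202 TestPreload 10.24
EOF
```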
TestDownloadOnly/v1.16.0/json-events (28.36s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-684000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-684000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (28.361026708s)

-- stdout --
	{"specversion":"1.0","id":"d7cce4c1-f149-451f-84c2-87c2d8f08cec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-684000] minikube v1.31.2 on Darwin 13.5.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"370f6e2d-ac44-497e-9ecd-e4ce65425d18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17194"}}
	{"specversion":"1.0","id":"531c01a8-79e7-45ea-b9f0-e401cf35847c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig"}}
	{"specversion":"1.0","id":"50360d5d-0b1d-4142-9186-669f80abbf41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"61ae91e6-84e8-497f-96b7-5f6c92b2e800","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f773a285-d40e-4e4e-83dc-3182f64e2a79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube"}}
	{"specversion":"1.0","id":"e3a1c7a1-5a13-4c8c-ae80-db5ed8c87103","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"3b8bc592-569f-4dee-9807-c530e994a055","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0aa1ef1-1f61-444a-b11f-107bc058bc1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"7558511b-ca11-4556-adfb-43f25c4df8df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"47c14a04-c81c-4de0-be89-199c3eadd692","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-684000 in cluster download-only-684000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"08f7fb1c-bdde-4677-b70e-e3f45343498d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e6d57f1-b2e1-442e-894f-b433d11b72c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107aa7b20 0x107aa7b20 0x107aa7b20 0x107aa7b20 0x107aa7b20 0x107aa7b20 0x107aa7b20] Decompressors:map[bz2:0x140001c33f0 gz:0x140001c33f8 tar:0x140001c33a0 tar.bz2:0x140001c33b0 tar.gz:0x140001c33c0 tar.xz:0x140001c33d0 tar.zst:0x140001c33e0 tbz2:0x140001c33b0 tgz:0x140001c33c0 txz:0x140001c33d0 tzst:0x140001c33e0 xz:0x140001c3400 zip:0x140001c3410 zst:0x140001c3408] Getters:map[file:0x14000708030 http:0x14000c84aa0 https:0x14000c84af0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"964fc426-8c92-4499-9d50-1b5e19780f1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0912 14:42:56.737560    1472 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:42:56.737699    1472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:42:56.737702    1472 out.go:309] Setting ErrFile to fd 2...
	I0912 14:42:56.737705    1472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:42:56.737812    1472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	W0912 14:42:56.737886    1472 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17194-1051/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17194-1051/.minikube/config/config.json: no such file or directory
	I0912 14:42:56.739019    1472 out.go:303] Setting JSON to true
	I0912 14:42:56.755468    1472 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":750,"bootTime":1694554226,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:42:56.755560    1472 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:42:56.760997    1472 out.go:97] [download-only-684000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:42:56.763919    1472 out.go:169] MINIKUBE_LOCATION=17194
	W0912 14:42:56.761152    1472 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 14:42:56.761213    1472 notify.go:220] Checking for updates...
	I0912 14:42:56.770986    1472 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:42:56.774011    1472 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:42:56.776978    1472 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:42:56.779959    1472 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	W0912 14:42:56.785959    1472 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 14:42:56.786173    1472 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:42:56.791066    1472 out.go:97] Using the qemu2 driver based on user configuration
	I0912 14:42:56.791085    1472 start.go:298] selected driver: qemu2
	I0912 14:42:56.791088    1472 start.go:902] validating driver "qemu2" against <nil>
	I0912 14:42:56.791138    1472 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 14:42:56.794917    1472 out.go:169] Automatically selected the socket_vmnet network
	I0912 14:42:56.800497    1472 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0912 14:42:56.800591    1472 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 14:42:56.800648    1472 cni.go:84] Creating CNI manager for ""
	I0912 14:42:56.800665    1472 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 14:42:56.800669    1472 start_flags.go:321] config:
	{Name:download-only-684000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-684000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:42:56.806077    1472 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:42:56.809997    1472 out.go:97] Downloading VM boot image ...
	I0912 14:42:56.810016    1472 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso
	I0912 14:43:11.357895    1472 out.go:97] Starting control plane node download-only-684000 in cluster download-only-684000
	I0912 14:43:11.357914    1472 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0912 14:43:11.470067    1472 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0912 14:43:11.470079    1472 cache.go:57] Caching tarball of preloaded images
	I0912 14:43:11.470313    1472 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0912 14:43:11.473020    1472 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0912 14:43:11.473030    1472 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:43:11.693035    1472 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0912 14:43:24.036976    1472 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:43:24.037096    1472 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:43:24.679356    1472 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0912 14:43:24.679555    1472 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/download-only-684000/config.json ...
	I0912 14:43:24.679572    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/download-only-684000/config.json: {Name:mk08a8eacb95eb27dd883eabd39b74e7ba802715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:43:24.679786    1472 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0912 14:43:24.680001    1472 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0912 14:43:25.030007    1472 out.go:169] 
	W0912 14:43:25.031919    1472 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107aa7b20 0x107aa7b20 0x107aa7b20 0x107aa7b20 0x107aa7b20 0x107aa7b20 0x107aa7b20] Decompressors:map[bz2:0x140001c33f0 gz:0x140001c33f8 tar:0x140001c33a0 tar.bz2:0x140001c33b0 tar.gz:0x140001c33c0 tar.xz:0x140001c33d0 tar.zst:0x140001c33e0 tbz2:0x140001c33b0 tgz:0x140001c33c0 txz:0x140001c33d0 tzst:0x140001c33e0 xz:0x140001c3400 zip:0x140001c3410 zst:0x140001c3408] Getters:map[file:0x14000708030 http:0x14000c84aa0 https:0x14000c84af0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0912 14:43:25.031946    1472 out_reason.go:110] 
	W0912 14:43:25.038983    1472 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:43:25.042951    1472 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-684000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (28.36s)
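The exit-40 failure above comes down to one URL: the kubectl checksum file for v1.16.0 does not exist for darwin/arm64 (that platform only got kubectl builds in much later releases), so the download 404s. A minimal sketch of how that URL is assembled, with the version/os/arch values taken from the log; the variable names here are illustrative, not minikube's own:

```shell
# Build the kubectl download URL the same way the log shows it being fetched.
# v1.16.0 + darwin/arm64 is the combination that has no published binary.
KUBE_VERSION=v1.16.0
KUBE_OS=darwin
KUBE_ARCH=arm64
URL="https://dl.k8s.io/release/${KUBE_VERSION}/bin/${KUBE_OS}/${KUBE_ARCH}/kubectl"
echo "${URL}.sha1"
```

A HEAD request against the printed URL (e.g. `curl -sI "${URL}.sha1"`) reproduces the "bad response code: 404" from the log.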

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
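This subtest fails as a direct consequence of the previous one: the cached kubectl binary was never written because the download exited with status 40. A minimal sketch of the equivalent check; the cache location here is a hypothetical stand-in, not the Jenkins path from the log:

```shell
# Stat the cached kubectl the way the test does, and report its absence.
# MINIKUBE_HOME/.minikube layout assumed; path is illustrative only.
CACHE="${MINIKUBE_HOME:-$HOME/.minikube}/cache/darwin/arm64/v1.16.0/kubectl"
if [ -f "$CACHE" ]; then
  echo "kubectl cached at $CACHE"
else
  echo "kubectl missing from cache"
fi
```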

TestOffline (9.87s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-677000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-677000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.70544875s)

-- stdout --
	* [offline-docker-677000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-677000 in cluster offline-docker-677000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-677000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 14:57:16.629129    2978 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:57:16.629280    2978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:57:16.629284    2978 out.go:309] Setting ErrFile to fd 2...
	I0912 14:57:16.629287    2978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:57:16.629412    2978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:57:16.630642    2978 out.go:303] Setting JSON to false
	I0912 14:57:16.647492    2978 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1610,"bootTime":1694554226,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:57:16.647591    2978 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:57:16.651791    2978 out.go:177] * [offline-docker-677000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:57:16.659821    2978 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 14:57:16.659876    2978 notify.go:220] Checking for updates...
	I0912 14:57:16.666832    2978 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:57:16.669808    2978 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:57:16.672775    2978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:57:16.675860    2978 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 14:57:16.678809    2978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:57:16.682136    2978 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:57:16.682192    2978 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:57:16.685781    2978 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 14:57:16.692774    2978 start.go:298] selected driver: qemu2
	I0912 14:57:16.692781    2978 start.go:902] validating driver "qemu2" against <nil>
	I0912 14:57:16.692793    2978 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:57:16.694630    2978 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 14:57:16.697766    2978 out.go:177] * Automatically selected the socket_vmnet network
	I0912 14:57:16.700755    2978 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 14:57:16.700778    2978 cni.go:84] Creating CNI manager for ""
	I0912 14:57:16.700785    2978 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:57:16.700791    2978 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 14:57:16.700799    2978 start_flags.go:321] config:
	{Name:offline-docker-677000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-677000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:57:16.705290    2978 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:57:16.712810    2978 out.go:177] * Starting control plane node offline-docker-677000 in cluster offline-docker-677000
	I0912 14:57:16.716772    2978 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 14:57:16.716798    2978 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 14:57:16.716808    2978 cache.go:57] Caching tarball of preloaded images
	I0912 14:57:16.716875    2978 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 14:57:16.716881    2978 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 14:57:16.716943    2978 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/offline-docker-677000/config.json ...
	I0912 14:57:16.716955    2978 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/offline-docker-677000/config.json: {Name:mkd45738fc0869d5d812880e5ebeaf1b0589d344 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:57:16.717150    2978 start.go:365] acquiring machines lock for offline-docker-677000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:57:16.717180    2978 start.go:369] acquired machines lock for "offline-docker-677000" in 22.125µs
	I0912 14:57:16.717190    2978 start.go:93] Provisioning new machine with config: &{Name:offline-docker-677000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-677000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:57:16.717226    2978 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:57:16.720778    2978 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 14:57:16.734726    2978 start.go:159] libmachine.API.Create for "offline-docker-677000" (driver="qemu2")
	I0912 14:57:16.734766    2978 client.go:168] LocalClient.Create starting
	I0912 14:57:16.734832    2978 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:57:16.734857    2978 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:16.734873    2978 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:16.734913    2978 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:57:16.734931    2978 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:16.734937    2978 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:16.735263    2978 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:57:16.850651    2978 main.go:141] libmachine: Creating SSH key...
	I0912 14:57:16.967873    2978 main.go:141] libmachine: Creating Disk image...
	I0912 14:57:16.967887    2978 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:57:16.968031    2978 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/disk.qcow2
	I0912 14:57:16.976936    2978 main.go:141] libmachine: STDOUT: 
	I0912 14:57:16.976966    2978 main.go:141] libmachine: STDERR: 
	I0912 14:57:16.977040    2978 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/disk.qcow2 +20000M
	I0912 14:57:16.984939    2978 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:57:16.984955    2978 main.go:141] libmachine: STDERR: 
	I0912 14:57:16.984986    2978 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/disk.qcow2
	I0912 14:57:16.984994    2978 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:57:16.985038    2978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:37:57:ac:a0:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/disk.qcow2
	I0912 14:57:16.986877    2978 main.go:141] libmachine: STDOUT: 
	I0912 14:57:16.986892    2978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:57:16.986914    2978 client.go:171] LocalClient.Create took 252.147125ms
	I0912 14:57:18.987033    2978 start.go:128] duration metric: createHost completed in 2.269828333s
	I0912 14:57:18.987051    2978 start.go:83] releasing machines lock for "offline-docker-677000", held for 2.269913s
	W0912 14:57:18.987060    2978 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:57:18.998932    2978 out.go:177] * Deleting "offline-docker-677000" in qemu2 ...
	W0912 14:57:19.006295    2978 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:57:19.006305    2978 start.go:703] Will try again in 5 seconds ...
	I0912 14:57:24.008305    2978 start.go:365] acquiring machines lock for offline-docker-677000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:57:24.008489    2978 start.go:369] acquired machines lock for "offline-docker-677000" in 146.916µs
	I0912 14:57:24.008541    2978 start.go:93] Provisioning new machine with config: &{Name:offline-docker-677000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-677000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:57:24.008640    2978 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:57:24.016505    2978 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 14:57:24.046246    2978 start.go:159] libmachine.API.Create for "offline-docker-677000" (driver="qemu2")
	I0912 14:57:24.046276    2978 client.go:168] LocalClient.Create starting
	I0912 14:57:24.046389    2978 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:57:24.046450    2978 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:24.046466    2978 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:24.046520    2978 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:57:24.046548    2978 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:24.046559    2978 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:24.047027    2978 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:57:24.167778    2978 main.go:141] libmachine: Creating SSH key...
	I0912 14:57:24.245757    2978 main.go:141] libmachine: Creating Disk image...
	I0912 14:57:24.245763    2978 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:57:24.245914    2978 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/disk.qcow2
	I0912 14:57:24.254408    2978 main.go:141] libmachine: STDOUT: 
	I0912 14:57:24.254423    2978 main.go:141] libmachine: STDERR: 
	I0912 14:57:24.254473    2978 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/disk.qcow2 +20000M
	I0912 14:57:24.261569    2978 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:57:24.261583    2978 main.go:141] libmachine: STDERR: 
	I0912 14:57:24.261592    2978 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/disk.qcow2
	I0912 14:57:24.261598    2978 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:57:24.261644    2978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:ab:23:d4:3e:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/offline-docker-677000/disk.qcow2
	I0912 14:57:24.263153    2978 main.go:141] libmachine: STDOUT: 
	I0912 14:57:24.263171    2978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:57:24.263185    2978 client.go:171] LocalClient.Create took 216.90975ms
	I0912 14:57:26.265328    2978 start.go:128] duration metric: createHost completed in 2.256705708s
	I0912 14:57:26.265388    2978 start.go:83] releasing machines lock for "offline-docker-677000", held for 2.256927417s
	W0912 14:57:26.265799    2978 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-677000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:57:26.276039    2978 out.go:177] 
	W0912 14:57:26.280272    2978 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 14:57:26.280315    2978 out.go:239] * 
	W0912 14:57:26.283252    2978 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:57:26.293083    2978 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-677000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:523: *** TestOffline FAILED at 2023-09-12 14:57:26.313049 -0700 PDT m=+869.673254501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-677000 -n offline-docker-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-677000 -n offline-docker-677000: exit status 7 (64.415292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-677000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-677000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-677000
--- FAIL: TestOffline (9.87s)
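Every start attempt in this run dies on the same error: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not listening when the qemu2 driver tried to attach the VM's network device. A minimal pre-flight sketch along these lines could surface that on the agent before the suite runs; the socket path is the default seen in the log, and the `brew services` remediation hint is an assumption about a Homebrew-managed install:

```shell
# check_vmnet_socket PATH
# Prints a diagnosis and returns 0 if PATH exists as a unix socket, 1 otherwise.
check_vmnet_socket() {
  if [ -S "$1" ]; then
    echo "ok: $1 exists"
  else
    # The remediation command is a guess for Homebrew installs; adjust as needed.
    echo "missing: $1 (try: sudo brew services start socket_vmnet)"
    return 1
  fi
}

# Path used by the qemu2 driver in this run; '|| true' keeps the sketch from
# aborting a 'set -e' shell when the socket is absent.
check_vmnet_socket /var/run/socket_vmnet || true
```

Run before `minikube start --driver=qemu2`, this would collapse the dozens of exit-status-80 failures above into one obvious environment error.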

                                                
                                    
TestAddons/Setup (44.34s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-428000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-428000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (44.339182584s)

-- stdout --
	* [addons-428000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-428000 in cluster addons-428000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying ingress addon...
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/registry:2.8.1
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	* Verifying csi-hostpath-driver addon...
	* Verifying Kubernetes components...
	

-- /stdout --
** stderr ** 
	I0912 14:43:42.648186    1552 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:43:42.648335    1552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:43:42.648339    1552 out.go:309] Setting ErrFile to fd 2...
	I0912 14:43:42.648341    1552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:43:42.648471    1552 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:43:42.649439    1552 out.go:303] Setting JSON to false
	I0912 14:43:42.664548    1552 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":796,"bootTime":1694554226,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:43:42.664637    1552 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:43:42.673649    1552 out.go:177] * [addons-428000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:43:42.677692    1552 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 14:43:42.677726    1552 notify.go:220] Checking for updates...
	I0912 14:43:42.680711    1552 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:43:42.683680    1552 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:43:42.685015    1552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:43:42.687609    1552 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 14:43:42.690698    1552 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:43:42.693860    1552 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:43:42.697671    1552 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 14:43:42.704657    1552 start.go:298] selected driver: qemu2
	I0912 14:43:42.704662    1552 start.go:902] validating driver "qemu2" against <nil>
	I0912 14:43:42.704667    1552 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:43:42.706556    1552 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 14:43:42.709624    1552 out.go:177] * Automatically selected the socket_vmnet network
	I0912 14:43:42.712779    1552 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 14:43:42.712815    1552 cni.go:84] Creating CNI manager for ""
	I0912 14:43:42.712824    1552 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:43:42.712833    1552 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 14:43:42.712840    1552 start_flags.go:321] config:
	{Name:addons-428000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-428000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:43:42.717053    1552 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:43:42.723684    1552 out.go:177] * Starting control plane node addons-428000 in cluster addons-428000
	I0912 14:43:42.727601    1552 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 14:43:42.727622    1552 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 14:43:42.727633    1552 cache.go:57] Caching tarball of preloaded images
	I0912 14:43:42.727706    1552 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 14:43:42.727711    1552 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 14:43:42.727937    1552 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/config.json ...
	I0912 14:43:42.727953    1552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/config.json: {Name:mk28eaffef2cbdc244811ddb0a2ad1c0e92d5450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:43:42.728192    1552 start.go:365] acquiring machines lock for addons-428000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:43:42.728313    1552 start.go:369] acquired machines lock for "addons-428000" in 114.541µs
	I0912 14:43:42.728328    1552 start.go:93] Provisioning new machine with config: &{Name:addons-428000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-428000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:43:42.728366    1552 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:43:42.736671    1552 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0912 14:43:43.040855    1552 start.go:159] libmachine.API.Create for "addons-428000" (driver="qemu2")
	I0912 14:43:43.040897    1552 client.go:168] LocalClient.Create starting
	I0912 14:43:43.041044    1552 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:43:43.108999    1552 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:43:43.195752    1552 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:43:43.704042    1552 main.go:141] libmachine: Creating SSH key...
	I0912 14:43:43.756140    1552 main.go:141] libmachine: Creating Disk image...
	I0912 14:43:43.756148    1552 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:43:43.756335    1552 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/disk.qcow2
	I0912 14:43:43.800253    1552 main.go:141] libmachine: STDOUT: 
	I0912 14:43:43.800276    1552 main.go:141] libmachine: STDERR: 
	I0912 14:43:43.800345    1552 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/disk.qcow2 +20000M
	I0912 14:43:43.807767    1552 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:43:43.807779    1552 main.go:141] libmachine: STDERR: 
	I0912 14:43:43.807795    1552 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/disk.qcow2
	I0912 14:43:43.807803    1552 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:43:43.807845    1552 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:b8:71:61:13:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/disk.qcow2
	I0912 14:43:43.875092    1552 main.go:141] libmachine: STDOUT: 
	I0912 14:43:43.875118    1552 main.go:141] libmachine: STDERR: 
	I0912 14:43:43.875122    1552 main.go:141] libmachine: Attempt 0
	I0912 14:43:43.875142    1552 main.go:141] libmachine: Searching for 7e:b8:71:61:13:50 in /var/db/dhcpd_leases ...
	I0912 14:43:45.877300    1552 main.go:141] libmachine: Attempt 1
	I0912 14:43:45.877396    1552 main.go:141] libmachine: Searching for 7e:b8:71:61:13:50 in /var/db/dhcpd_leases ...
	I0912 14:43:47.879612    1552 main.go:141] libmachine: Attempt 2
	I0912 14:43:47.879652    1552 main.go:141] libmachine: Searching for 7e:b8:71:61:13:50 in /var/db/dhcpd_leases ...
	I0912 14:43:49.881687    1552 main.go:141] libmachine: Attempt 3
	I0912 14:43:49.881699    1552 main.go:141] libmachine: Searching for 7e:b8:71:61:13:50 in /var/db/dhcpd_leases ...
	I0912 14:43:51.883702    1552 main.go:141] libmachine: Attempt 4
	I0912 14:43:51.883714    1552 main.go:141] libmachine: Searching for 7e:b8:71:61:13:50 in /var/db/dhcpd_leases ...
	I0912 14:43:53.885788    1552 main.go:141] libmachine: Attempt 5
	I0912 14:43:53.885823    1552 main.go:141] libmachine: Searching for 7e:b8:71:61:13:50 in /var/db/dhcpd_leases ...
	I0912 14:43:55.887893    1552 main.go:141] libmachine: Attempt 6
	I0912 14:43:55.887947    1552 main.go:141] libmachine: Searching for 7e:b8:71:61:13:50 in /var/db/dhcpd_leases ...
	I0912 14:43:55.888056    1552 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0912 14:43:55.888079    1552 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:b8:71:61:13:50 ID:1,7e:b8:71:61:13:50 Lease:0x65022d1a}
	I0912 14:43:55.888083    1552 main.go:141] libmachine: Found match: 7e:b8:71:61:13:50
	I0912 14:43:55.888094    1552 main.go:141] libmachine: IP: 192.168.105.2
	I0912 14:43:55.888101    1552 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0912 14:43:57.909015    1552 machine.go:88] provisioning docker machine ...
	I0912 14:43:57.909074    1552 buildroot.go:166] provisioning hostname "addons-428000"
	I0912 14:43:57.910461    1552 main.go:141] libmachine: Using SSH client type: native
	I0912 14:43:57.911189    1552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c70760] 0x104c72ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0912 14:43:57.911209    1552 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-428000 && echo "addons-428000" | sudo tee /etc/hostname
	I0912 14:43:57.993262    1552 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-428000
	
	I0912 14:43:57.993370    1552 main.go:141] libmachine: Using SSH client type: native
	I0912 14:43:57.993793    1552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c70760] 0x104c72ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0912 14:43:57.993812    1552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-428000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-428000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-428000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 14:43:58.056467    1552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 14:43:58.056490    1552 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17194-1051/.minikube CaCertPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17194-1051/.minikube}
	I0912 14:43:58.056502    1552 buildroot.go:174] setting up certificates
	I0912 14:43:58.056511    1552 provision.go:83] configureAuth start
	I0912 14:43:58.056517    1552 provision.go:138] copyHostCerts
	I0912 14:43:58.056674    1552 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.pem (1082 bytes)
	I0912 14:43:58.056981    1552 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17194-1051/.minikube/cert.pem (1123 bytes)
	I0912 14:43:58.057122    1552 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17194-1051/.minikube/key.pem (1679 bytes)
	I0912 14:43:58.057274    1552 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca-key.pem org=jenkins.addons-428000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-428000]
	I0912 14:43:58.146081    1552 provision.go:172] copyRemoteCerts
	I0912 14:43:58.146145    1552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 14:43:58.146154    1552 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/id_rsa Username:docker}
	I0912 14:43:58.175088    1552 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 14:43:58.181948    1552 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0912 14:43:58.188765    1552 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 14:43:58.195598    1552 provision.go:86] duration metric: configureAuth took 139.084833ms
	I0912 14:43:58.195605    1552 buildroot.go:189] setting minikube options for container-runtime
	I0912 14:43:58.195705    1552 config.go:182] Loaded profile config "addons-428000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:43:58.195736    1552 main.go:141] libmachine: Using SSH client type: native
	I0912 14:43:58.195942    1552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c70760] 0x104c72ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0912 14:43:58.195949    1552 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 14:43:58.246420    1552 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0912 14:43:58.246428    1552 buildroot.go:70] root file system type: tmpfs
	I0912 14:43:58.246497    1552 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 14:43:58.246544    1552 main.go:141] libmachine: Using SSH client type: native
	I0912 14:43:58.246769    1552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c70760] 0x104c72ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0912 14:43:58.246802    1552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 14:43:58.300971    1552 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 14:43:58.301013    1552 main.go:141] libmachine: Using SSH client type: native
	I0912 14:43:58.301271    1552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c70760] 0x104c72ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0912 14:43:58.301286    1552 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 14:43:58.653836    1552 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0912 14:43:58.653856    1552 machine.go:91] provisioned docker machine in 744.82725ms
	I0912 14:43:58.653862    1552 client.go:171] LocalClient.Create took 15.613272167s
	I0912 14:43:58.653874    1552 start.go:167] duration metric: libmachine.API.Create for "addons-428000" took 15.6133415s
	I0912 14:43:58.653880    1552 start.go:300] post-start starting for "addons-428000" (driver="qemu2")
	I0912 14:43:58.653884    1552 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 14:43:58.653950    1552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 14:43:58.653959    1552 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/id_rsa Username:docker}
	I0912 14:43:58.682930    1552 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 14:43:58.684371    1552 info.go:137] Remote host: Buildroot 2021.02.12
	I0912 14:43:58.684380    1552 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17194-1051/.minikube/addons for local assets ...
	I0912 14:43:58.684441    1552 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17194-1051/.minikube/files for local assets ...
	I0912 14:43:58.684468    1552 start.go:303] post-start completed in 30.585959ms
	I0912 14:43:58.684808    1552 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/config.json ...
	I0912 14:43:58.684965    1552 start.go:128] duration metric: createHost completed in 15.956914292s
	I0912 14:43:58.684985    1552 main.go:141] libmachine: Using SSH client type: native
	I0912 14:43:58.685196    1552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c70760] 0x104c72ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0912 14:43:58.685200    1552 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 14:43:58.734693    1552 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694555038.451937752
	
	I0912 14:43:58.734706    1552 fix.go:206] guest clock: 1694555038.451937752
	I0912 14:43:58.734710    1552 fix.go:219] Guest: 2023-09-12 14:43:58.451937752 -0700 PDT Remote: 2023-09-12 14:43:58.684967 -0700 PDT m=+16.055889084 (delta=-233.029248ms)
	I0912 14:43:58.734721    1552 fix.go:190] guest clock delta is within tolerance: -233.029248ms
	I0912 14:43:58.734724    1552 start.go:83] releasing machines lock for "addons-428000", held for 16.006726542s
	I0912 14:43:58.735079    1552 ssh_runner.go:195] Run: cat /version.json
	I0912 14:43:58.735086    1552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 14:43:58.735088    1552 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/id_rsa Username:docker}
	I0912 14:43:58.735126    1552 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/id_rsa Username:docker}
	I0912 14:43:58.804581    1552 ssh_runner.go:195] Run: systemctl --version
	I0912 14:43:58.806634    1552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 14:43:58.808561    1552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 14:43:58.808591    1552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 14:43:58.813705    1552 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 14:43:58.813712    1552 start.go:469] detecting cgroup driver to use...
	I0912 14:43:58.813813    1552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 14:43:58.819008    1552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0912 14:43:58.822089    1552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 14:43:58.825335    1552 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 14:43:58.825361    1552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 14:43:58.828436    1552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 14:43:58.831839    1552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 14:43:58.835157    1552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 14:43:58.838730    1552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 14:43:58.841711    1552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 14:43:58.844623    1552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 14:43:58.847674    1552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 14:43:58.850973    1552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:43:58.937823    1552 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 14:43:58.943660    1552 start.go:469] detecting cgroup driver to use...
	I0912 14:43:58.943719    1552 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 14:43:58.951171    1552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 14:43:58.955942    1552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 14:43:58.962021    1552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 14:43:58.966499    1552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 14:43:58.970949    1552 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 14:43:59.007359    1552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 14:43:59.012794    1552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 14:43:59.018119    1552 ssh_runner.go:195] Run: which cri-dockerd
	I0912 14:43:59.019440    1552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 14:43:59.022135    1552 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0912 14:43:59.027019    1552 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 14:43:59.104951    1552 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 14:43:59.181569    1552 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 14:43:59.181581    1552 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0912 14:43:59.187047    1552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:43:59.257351    1552 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 14:44:00.420018    1552 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162673125s)
	I0912 14:44:00.420073    1552 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 14:44:00.503844    1552 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 14:44:00.585895    1552 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 14:44:00.660867    1552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:44:00.742778    1552 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 14:44:00.750493    1552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:44:00.835414    1552 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0912 14:44:00.859362    1552 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0912 14:44:00.859443    1552 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0912 14:44:00.861592    1552 start.go:537] Will wait 60s for crictl version
	I0912 14:44:00.861623    1552 ssh_runner.go:195] Run: which crictl
	I0912 14:44:00.863049    1552 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 14:44:00.878608    1552 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0912 14:44:00.878674    1552 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 14:44:00.888409    1552 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 14:44:00.899718    1552 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0912 14:44:00.899880    1552 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0912 14:44:00.901313    1552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 14:44:00.904975    1552 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 14:44:00.905024    1552 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 14:44:00.910036    1552 docker.go:636] Got preloaded images: 
	I0912 14:44:00.910043    1552 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0912 14:44:00.910082    1552 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0912 14:44:00.912865    1552 ssh_runner.go:195] Run: which lz4
	I0912 14:44:00.914094    1552 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 14:44:00.915387    1552 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 14:44:00.915401    1552 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0912 14:44:02.254736    1552 docker.go:600] Took 1.340685 seconds to copy over tarball
	I0912 14:44:02.254805    1552 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 14:44:03.297492    1552 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.042690584s)
	I0912 14:44:03.297507    1552 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 14:44:03.313128    1552 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0912 14:44:03.316263    1552 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0912 14:44:03.321780    1552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:44:03.401557    1552 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 14:44:04.971720    1552 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.570178791s)
	I0912 14:44:04.971817    1552 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 14:44:04.977854    1552 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 14:44:04.977866    1552 cache_images.go:84] Images are preloaded, skipping loading
	I0912 14:44:04.978121    1552 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0912 14:44:04.986887    1552 cni.go:84] Creating CNI manager for ""
	I0912 14:44:04.986896    1552 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:44:04.986919    1552 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0912 14:44:04.986928    1552 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-428000 NodeName:addons-428000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 14:44:04.986993    1552 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-428000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 14:44:04.987036    1552 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-428000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-428000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0912 14:44:04.987086    1552 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0912 14:44:04.990684    1552 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 14:44:04.990723    1552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 14:44:04.993429    1552 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0912 14:44:04.998444    1552 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 14:44:05.003483    1552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0912 14:44:05.008707    1552 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0912 14:44:05.009964    1552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 14:44:05.013416    1552 certs.go:56] Setting up /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000 for IP: 192.168.105.2
	I0912 14:44:05.013426    1552 certs.go:190] acquiring lock for shared ca certs: {Name:mk62fa2aa67693071dd0720b8deb8309ed3c8567 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:44:05.013577    1552 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.key
	I0912 14:44:05.083640    1552 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.crt ...
	I0912 14:44:05.083648    1552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.crt: {Name:mkcb67c598bcfad966fc8b880bc42f6a98d76ea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:44:05.083832    1552 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.key ...
	I0912 14:44:05.083836    1552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.key: {Name:mkf091c86ee56867ed15e99778e930e786609168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:44:05.084025    1552 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.key
	I0912 14:44:05.210197    1552 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.crt ...
	I0912 14:44:05.210201    1552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.crt: {Name:mk8cbcca40fbe983f01236f175e75e2d1df70b4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:44:05.210354    1552 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.key ...
	I0912 14:44:05.210357    1552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.key: {Name:mk303f2ca5e4ba3d18cea4bbd25d78cb54303b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:44:05.210502    1552 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/client.key
	I0912 14:44:05.210509    1552 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/client.crt with IP's: []
	I0912 14:44:05.262213    1552 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/client.crt ...
	I0912 14:44:05.262221    1552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/client.crt: {Name:mk22092a7ce47eed4482c33b6cfa7461f4e89211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:44:05.262356    1552 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/client.key ...
	I0912 14:44:05.262358    1552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/client.key: {Name:mk75236936b17debf6f9c29336c56aededab334e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:44:05.262455    1552 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/apiserver.key.96055969
	I0912 14:44:05.262464    1552 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0912 14:44:05.371233    1552 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/apiserver.crt.96055969 ...
	I0912 14:44:05.371242    1552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/apiserver.crt.96055969: {Name:mk7cc167a9db880250c60bd730e74d5564e5629f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:44:05.371431    1552 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/apiserver.key.96055969 ...
	I0912 14:44:05.371434    1552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/apiserver.key.96055969: {Name:mkfa5e0ca78e348c878b09a1dec0938d7ed950ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:44:05.371522    1552 certs.go:337] copying /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/apiserver.crt
	I0912 14:44:05.371721    1552 certs.go:341] copying /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/apiserver.key
	I0912 14:44:05.371800    1552 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/proxy-client.key
	I0912 14:44:05.371812    1552 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/proxy-client.crt with IP's: []
	I0912 14:44:05.404749    1552 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/proxy-client.crt ...
	I0912 14:44:05.404752    1552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/proxy-client.crt: {Name:mkb14184500ac759bffa96710f5fc18cd2bd1ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:44:05.404860    1552 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/proxy-client.key ...
	I0912 14:44:05.404862    1552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/proxy-client.key: {Name:mk34728dd2e747e113a6ea990d8d4d1ad0f3c4c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:44:05.405076    1552 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca-key.pem (1675 bytes)
	I0912 14:44:05.405097    1552 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem (1082 bytes)
	I0912 14:44:05.405115    1552 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem (1123 bytes)
	I0912 14:44:05.405136    1552 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/key.pem (1679 bytes)
	I0912 14:44:05.405421    1552 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0912 14:44:05.412653    1552 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 14:44:05.419381    1552 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 14:44:05.427049    1552 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/addons-428000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 14:44:05.434149    1552 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 14:44:05.441141    1552 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0912 14:44:05.447800    1552 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 14:44:05.455171    1552 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 14:44:05.462426    1552 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 14:44:05.469504    1552 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 14:44:05.475189    1552 ssh_runner.go:195] Run: openssl version
	I0912 14:44:05.477584    1552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 14:44:05.480591    1552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 14:44:05.482156    1552 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:44 /usr/share/ca-certificates/minikubeCA.pem
	I0912 14:44:05.482178    1552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 14:44:05.483953    1552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 14:44:05.487529    1552 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0912 14:44:05.488896    1552 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0912 14:44:05.488933    1552 kubeadm.go:404] StartCluster: {Name:addons-428000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-428000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:44:05.488997    1552 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 14:44:05.494433    1552 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 14:44:05.497283    1552 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 14:44:05.499960    1552 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 14:44:05.503259    1552 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 14:44:05.503273    1552 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 14:44:05.524860    1552 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0912 14:44:05.524911    1552 kubeadm.go:322] [preflight] Running pre-flight checks
	I0912 14:44:05.578079    1552 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 14:44:05.578125    1552 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 14:44:05.578212    1552 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 14:44:05.641237    1552 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 14:44:05.649476    1552 out.go:204]   - Generating certificates and keys ...
	I0912 14:44:05.649513    1552 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0912 14:44:05.649562    1552 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0912 14:44:05.840075    1552 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 14:44:05.891118    1552 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0912 14:44:06.120172    1552 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0912 14:44:06.216025    1552 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0912 14:44:06.338120    1552 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0912 14:44:06.338186    1552 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-428000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0912 14:44:06.481610    1552 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0912 14:44:06.481675    1552 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-428000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0912 14:44:06.677935    1552 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 14:44:06.769432    1552 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 14:44:06.806510    1552 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0912 14:44:06.806537    1552 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 14:44:07.027231    1552 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 14:44:07.253609    1552 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 14:44:07.461232    1552 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 14:44:07.599787    1552 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 14:44:07.599975    1552 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 14:44:07.601051    1552 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 14:44:07.604381    1552 out.go:204]   - Booting up control plane ...
	I0912 14:44:07.604481    1552 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 14:44:07.604526    1552 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 14:44:07.604570    1552 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 14:44:07.608561    1552 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 14:44:07.609003    1552 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 14:44:07.609092    1552 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0912 14:44:07.692501    1552 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 14:44:11.695800    1552 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003258 seconds
	I0912 14:44:11.695858    1552 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 14:44:11.701376    1552 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 14:44:12.209452    1552 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 14:44:12.209539    1552 kubeadm.go:322] [mark-control-plane] Marking the node addons-428000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 14:44:12.714165    1552 kubeadm.go:322] [bootstrap-token] Using token: gbyef2.hhxgzbunk46xhk5x
	I0912 14:44:12.719923    1552 out.go:204]   - Configuring RBAC rules ...
	I0912 14:44:12.719988    1552 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 14:44:12.721036    1552 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 14:44:12.727675    1552 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 14:44:12.728610    1552 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 14:44:12.729610    1552 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 14:44:12.730615    1552 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 14:44:12.735012    1552 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 14:44:12.902699    1552 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0912 14:44:13.124746    1552 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0912 14:44:13.125257    1552 kubeadm.go:322] 
	I0912 14:44:13.125292    1552 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0912 14:44:13.125296    1552 kubeadm.go:322] 
	I0912 14:44:13.125335    1552 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0912 14:44:13.125337    1552 kubeadm.go:322] 
	I0912 14:44:13.125349    1552 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0912 14:44:13.125375    1552 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 14:44:13.125398    1552 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 14:44:13.125402    1552 kubeadm.go:322] 
	I0912 14:44:13.125425    1552 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0912 14:44:13.125429    1552 kubeadm.go:322] 
	I0912 14:44:13.125452    1552 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 14:44:13.125456    1552 kubeadm.go:322] 
	I0912 14:44:13.125479    1552 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0912 14:44:13.125523    1552 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 14:44:13.125560    1552 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 14:44:13.125563    1552 kubeadm.go:322] 
	I0912 14:44:13.125605    1552 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 14:44:13.125649    1552 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0912 14:44:13.125655    1552 kubeadm.go:322] 
	I0912 14:44:13.125697    1552 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gbyef2.hhxgzbunk46xhk5x \
	I0912 14:44:13.125757    1552 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e282c167e7eeeb67fd4ecdd8b7cd7118f3d3f8a2efd76b40b6ed9b18bf47a7d9 \
	I0912 14:44:13.125778    1552 kubeadm.go:322] 	--control-plane 
	I0912 14:44:13.125781    1552 kubeadm.go:322] 
	I0912 14:44:13.125821    1552 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0912 14:44:13.125825    1552 kubeadm.go:322] 
	I0912 14:44:13.125863    1552 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gbyef2.hhxgzbunk46xhk5x \
	I0912 14:44:13.125920    1552 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e282c167e7eeeb67fd4ecdd8b7cd7118f3d3f8a2efd76b40b6ed9b18bf47a7d9 
	I0912 14:44:13.125987    1552 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 14:44:13.125998    1552 cni.go:84] Creating CNI manager for ""
	I0912 14:44:13.126006    1552 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:44:13.132015    1552 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 14:44:13.136071    1552 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 14:44:13.139175    1552 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0912 14:44:13.144029    1552 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 14:44:13.144094    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45f04e6c33f17ea86560d581e35f03eca0c584e1 minikube.k8s.io/name=addons-428000 minikube.k8s.io/updated_at=2023_09_12T14_44_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:13.144108    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:13.205426    1552 ops.go:34] apiserver oom_adj: -16
	I0912 14:44:13.205460    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:13.240406    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:13.777001    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:14.277068    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:14.777024    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:15.277034    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:15.777024    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:16.277011    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:16.777011    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:17.276999    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:17.776954    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:18.275952    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:18.776940    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:19.276154    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:19.776961    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:20.276896    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:20.776300    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:21.276904    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:21.775491    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:22.276890    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:22.776892    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:23.276845    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:23.776950    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:24.276872    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:24.775980    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:25.276798    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:25.776852    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:26.276779    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:26.776790    1552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:44:26.870484    1552 kubeadm.go:1081] duration metric: took 13.726667209s to wait for elevateKubeSystemPrivileges.
	I0912 14:44:26.870503    1552 kubeadm.go:406] StartCluster complete in 21.381999042s
	I0912 14:44:26.870531    1552 settings.go:142] acquiring lock: {Name:mke2a1c2b91a69fc9538d2ab9217887ccaa535ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:44:26.870690    1552 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:44:26.870889    1552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/kubeconfig: {Name:mk92e8fca531d1e53b216ab5c46209b819337697 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:44:26.871177    1552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 14:44:26.871260    1552 config.go:182] Loaded profile config "addons-428000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:44:26.871220    1552 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0912 14:44:26.871391    1552 addons.go:69] Setting cloud-spanner=true in profile "addons-428000"
	I0912 14:44:26.871397    1552 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-428000"
	I0912 14:44:26.871407    1552 addons.go:231] Setting addon cloud-spanner=true in "addons-428000"
	I0912 14:44:26.871410    1552 addons.go:69] Setting default-storageclass=true in profile "addons-428000"
	I0912 14:44:26.871416    1552 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-428000"
	I0912 14:44:26.871417    1552 addons.go:69] Setting volumesnapshots=true in profile "addons-428000"
	I0912 14:44:26.871425    1552 addons.go:69] Setting metrics-server=true in profile "addons-428000"
	I0912 14:44:26.871436    1552 addons.go:231] Setting addon volumesnapshots=true in "addons-428000"
	I0912 14:44:26.871439    1552 host.go:66] Checking if "addons-428000" exists ...
	I0912 14:44:26.871446    1552 addons.go:231] Setting addon metrics-server=true in "addons-428000"
	I0912 14:44:26.871448    1552 addons.go:69] Setting gcp-auth=true in profile "addons-428000"
	I0912 14:44:26.871451    1552 addons.go:69] Setting ingress-dns=true in profile "addons-428000"
	I0912 14:44:26.871466    1552 addons.go:231] Setting addon ingress-dns=true in "addons-428000"
	I0912 14:44:26.871470    1552 mustload.go:65] Loading cluster: addons-428000
	I0912 14:44:26.871393    1552 addons.go:69] Setting ingress=true in profile "addons-428000"
	I0912 14:44:26.871488    1552 host.go:66] Checking if "addons-428000" exists ...
	I0912 14:44:26.871494    1552 addons.go:69] Setting registry=true in profile "addons-428000"
	I0912 14:44:26.871499    1552 addons.go:231] Setting addon registry=true in "addons-428000"
	I0912 14:44:26.871506    1552 addons.go:231] Setting addon ingress=true in "addons-428000"
	I0912 14:44:26.871516    1552 addons.go:69] Setting storage-provisioner=true in profile "addons-428000"
	I0912 14:44:26.871520    1552 addons.go:231] Setting addon storage-provisioner=true in "addons-428000"
	I0912 14:44:26.871531    1552 host.go:66] Checking if "addons-428000" exists ...
	I0912 14:44:26.871533    1552 host.go:66] Checking if "addons-428000" exists ...
	I0912 14:44:26.871561    1552 host.go:66] Checking if "addons-428000" exists ...
	I0912 14:44:26.871737    1552 config.go:182] Loaded profile config "addons-428000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:44:26.871489    1552 host.go:66] Checking if "addons-428000" exists ...
	W0912 14:44:26.871818    1552 host.go:54] host status for "addons-428000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/monitor: connect: connection refused
	W0912 14:44:26.871825    1552 addons.go:277] "addons-428000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0912 14:44:26.871408    1552 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-428000"
	I0912 14:44:26.871873    1552 host.go:66] Checking if "addons-428000" exists ...
	W0912 14:44:26.871898    1552 host.go:54] host status for "addons-428000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/monitor: connect: connection refused
	W0912 14:44:26.871907    1552 addons.go:277] "addons-428000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0912 14:44:26.871910    1552 addons.go:467] Verifying addon ingress=true in "addons-428000"
	I0912 14:44:26.874868    1552 out.go:177] * Verifying ingress addon...
	I0912 14:44:26.871511    1552 host.go:66] Checking if "addons-428000" exists ...
	W0912 14:44:26.872015    1552 host.go:54] host status for "addons-428000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/monitor: connect: connection refused
	I0912 14:44:26.871513    1552 addons.go:69] Setting inspektor-gadget=true in profile "addons-428000"
	W0912 14:44:26.872135    1552 host.go:54] host status for "addons-428000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/monitor: connect: connection refused
	W0912 14:44:26.872164    1552 host.go:54] host status for "addons-428000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/monitor: connect: connection refused
	W0912 14:44:26.872251    1552 host.go:54] host status for "addons-428000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/monitor: connect: connection refused
	W0912 14:44:26.880722    1552 addons.go:277] "addons-428000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0912 14:44:26.880736    1552 addons.go:277] "addons-428000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0912 14:44:26.880745    1552 addons.go:277] "addons-428000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0912 14:44:26.880749    1552 addons.go:231] Setting addon inspektor-gadget=true in "addons-428000"
	I0912 14:44:26.881122    1552 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0912 14:44:26.883800    1552 out.go:177] 
	W0912 14:44:26.883804    1552 addons.go:277] "addons-428000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	I0912 14:44:26.886309    1552 addons.go:231] Setting addon default-storageclass=true in "addons-428000"
	I0912 14:44:26.887708    1552 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 14:44:26.887719    1552 addons.go:467] Verifying addon metrics-server=true in "addons-428000"
	I0912 14:44:26.887741    1552 host.go:66] Checking if "addons-428000" exists ...
	I0912 14:44:26.890847    1552 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0912 14:44:26.890846    1552 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-428000"
	I0912 14:44:26.890954    1552 host.go:66] Checking if "addons-428000" exists ...
	I0912 14:44:26.897147    1552 out.go:177]   - Using image docker.io/registry:2.8.1
	I0912 14:44:26.898548    1552 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	W0912 14:44:26.903845    1552 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/monitor: connect: connection refused
	I0912 14:44:26.904050    1552 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 14:44:26.904553    1552 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-428000" context rescaled to 1 replicas
	I0912 14:44:26.904623    1552 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 14:44:26.911716    1552 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/monitor: connect: connection refused
	I0912 14:44:26.918727    1552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 14:44:26.922765    1552 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/id_rsa Username:docker}
	I0912 14:44:26.918735    1552 out.go:177] * Verifying csi-hostpath-driver addon...
	I0912 14:44:26.922803    1552 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0912 14:44:26.922809    1552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0912 14:44:26.922815    1552 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/id_rsa Username:docker}
	I0912 14:44:26.918761    1552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 14:44:26.930656    1552 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/id_rsa Username:docker}
	I0912 14:44:26.918774    1552 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	W0912 14:44:26.918747    1552 out.go:239] * 
	I0912 14:44:26.931142    1552 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0912 14:44:26.934821    1552 out.go:177] * Verifying Kubernetes components...
	* 
	I0912 14:44:26.938806    1552 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0912 14:44:26.945652    1552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0912 14:44:26.945665    1552 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/addons-428000/id_rsa Username:docker}
	W0912 14:44:26.939308    1552 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:44:26.954669    1552 out.go:177] 
	I0912 14:44:26.945691    1552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet

** /stderr **
addons_test.go:90: out/minikube-darwin-arm64 start -p addons-428000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (44.34s)

TestCertOptions (10.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-379000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-379000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.814431125s)

-- stdout --
	* [cert-options-379000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-379000 in cluster cert-options-379000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-379000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-379000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-379000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-379000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-379000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (79.60175ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-379000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-379000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-379000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-379000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-379000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (41.288ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-379000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-379000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-379000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-09-12 14:57:56.377629 -0700 PDT m=+899.738434667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-379000 -n cert-options-379000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-379000 -n cert-options-379000: exit status 7 (29.949666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-379000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-379000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-379000
--- FAIL: TestCertOptions (10.10s)

TestCertExpiration (195.31s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-050000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-050000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.891284792s)

-- stdout --
	* [cert-expiration-050000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-050000 in cluster cert-expiration-050000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-050000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-050000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-050000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-050000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-050000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.239149541s)

-- stdout --
	* [cert-expiration-050000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-050000 in cluster cert-expiration-050000
	* Restarting existing qemu2 VM for "cert-expiration-050000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-050000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-050000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-050000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-050000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-050000 in cluster cert-expiration-050000
	* Restarting existing qemu2 VM for "cert-expiration-050000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-050000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-050000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-09-12 15:00:56.453188 -0700 PDT m=+1079.817590084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-050000 -n cert-expiration-050000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-050000 -n cert-expiration-050000: exit status 7 (71.543417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-050000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-050000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-050000
--- FAIL: TestCertExpiration (195.31s)

TestDockerFlags (9.96s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-564000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-564000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.700900667s)

-- stdout --
	* [docker-flags-564000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-564000 in cluster docker-flags-564000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-564000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 14:57:36.472401    3178 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:57:36.472516    3178 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:57:36.472519    3178 out.go:309] Setting ErrFile to fd 2...
	I0912 14:57:36.472522    3178 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:57:36.472659    3178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:57:36.473672    3178 out.go:303] Setting JSON to false
	I0912 14:57:36.488700    3178 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1630,"bootTime":1694554226,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:57:36.488755    3178 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:57:36.493764    3178 out.go:177] * [docker-flags-564000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:57:36.501755    3178 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 14:57:36.501820    3178 notify.go:220] Checking for updates...
	I0912 14:57:36.508785    3178 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:57:36.511695    3178 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:57:36.514757    3178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:57:36.517816    3178 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 14:57:36.520682    3178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:57:36.524148    3178 config.go:182] Loaded profile config "force-systemd-flag-646000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:57:36.524227    3178 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:57:36.524275    3178 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:57:36.528748    3178 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 14:57:36.535735    3178 start.go:298] selected driver: qemu2
	I0912 14:57:36.535740    3178 start.go:902] validating driver "qemu2" against <nil>
	I0912 14:57:36.535745    3178 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:57:36.537678    3178 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 14:57:36.540734    3178 out.go:177] * Automatically selected the socket_vmnet network
	I0912 14:57:36.543861    3178 start_flags.go:917] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0912 14:57:36.543895    3178 cni.go:84] Creating CNI manager for ""
	I0912 14:57:36.543910    3178 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:57:36.543914    3178 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 14:57:36.543921    3178 start_flags.go:321] config:
	{Name:docker-flags-564000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-564000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/ru
n/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:57:36.548296    3178 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:57:36.555757    3178 out.go:177] * Starting control plane node docker-flags-564000 in cluster docker-flags-564000
	I0912 14:57:36.559735    3178 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 14:57:36.559756    3178 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 14:57:36.559768    3178 cache.go:57] Caching tarball of preloaded images
	I0912 14:57:36.559831    3178 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 14:57:36.559842    3178 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 14:57:36.559915    3178 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/docker-flags-564000/config.json ...
	I0912 14:57:36.559928    3178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/docker-flags-564000/config.json: {Name:mkda8ab31119ab2223d2c2db01ae0aeb5e9834fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:57:36.560135    3178 start.go:365] acquiring machines lock for docker-flags-564000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:57:36.560169    3178 start.go:369] acquired machines lock for "docker-flags-564000" in 25.167µs
	I0912 14:57:36.560181    3178 start.go:93] Provisioning new machine with config: &{Name:docker-flags-564000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-564000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:57:36.560219    3178 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:57:36.568739    3178 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 14:57:36.584577    3178 start.go:159] libmachine.API.Create for "docker-flags-564000" (driver="qemu2")
	I0912 14:57:36.584602    3178 client.go:168] LocalClient.Create starting
	I0912 14:57:36.584660    3178 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:57:36.584686    3178 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:36.584696    3178 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:36.584737    3178 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:57:36.584762    3178 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:36.584774    3178 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:36.585087    3178 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:57:36.702317    3178 main.go:141] libmachine: Creating SSH key...
	I0912 14:57:36.768020    3178 main.go:141] libmachine: Creating Disk image...
	I0912 14:57:36.768030    3178 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:57:36.768180    3178 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/disk.qcow2
	I0912 14:57:36.776681    3178 main.go:141] libmachine: STDOUT: 
	I0912 14:57:36.776697    3178 main.go:141] libmachine: STDERR: 
	I0912 14:57:36.776754    3178 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/disk.qcow2 +20000M
	I0912 14:57:36.783970    3178 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:57:36.783982    3178 main.go:141] libmachine: STDERR: 
	I0912 14:57:36.783995    3178 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/disk.qcow2
	I0912 14:57:36.784010    3178 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:57:36.784040    3178 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:71:12:49:8e:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/disk.qcow2
	I0912 14:57:36.785487    3178 main.go:141] libmachine: STDOUT: 
	I0912 14:57:36.785500    3178 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:57:36.785519    3178 client.go:171] LocalClient.Create took 200.914542ms
	I0912 14:57:38.787694    3178 start.go:128] duration metric: createHost completed in 2.227489s
	I0912 14:57:38.787787    3178 start.go:83] releasing machines lock for "docker-flags-564000", held for 2.227652958s
	W0912 14:57:38.787857    3178 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:57:38.807118    3178 out.go:177] * Deleting "docker-flags-564000" in qemu2 ...
	W0912 14:57:38.822471    3178 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:57:38.822504    3178 start.go:703] Will try again in 5 seconds ...
	I0912 14:57:43.824610    3178 start.go:365] acquiring machines lock for docker-flags-564000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:57:43.824995    3178 start.go:369] acquired machines lock for "docker-flags-564000" in 290.5µs
	I0912 14:57:43.825101    3178 start.go:93] Provisioning new machine with config: &{Name:docker-flags-564000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-564000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:57:43.825401    3178 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:57:43.834781    3178 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 14:57:43.882752    3178 start.go:159] libmachine.API.Create for "docker-flags-564000" (driver="qemu2")
	I0912 14:57:43.882809    3178 client.go:168] LocalClient.Create starting
	I0912 14:57:43.882941    3178 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:57:43.883015    3178 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:43.883040    3178 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:43.883125    3178 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:57:43.883176    3178 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:43.883191    3178 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:43.883853    3178 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:57:44.013453    3178 main.go:141] libmachine: Creating SSH key...
	I0912 14:57:44.088914    3178 main.go:141] libmachine: Creating Disk image...
	I0912 14:57:44.088927    3178 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:57:44.089068    3178 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/disk.qcow2
	I0912 14:57:44.097404    3178 main.go:141] libmachine: STDOUT: 
	I0912 14:57:44.097419    3178 main.go:141] libmachine: STDERR: 
	I0912 14:57:44.097498    3178 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/disk.qcow2 +20000M
	I0912 14:57:44.104640    3178 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:57:44.104657    3178 main.go:141] libmachine: STDERR: 
	I0912 14:57:44.104670    3178 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/disk.qcow2
	I0912 14:57:44.104702    3178 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:57:44.104753    3178 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:e6:14:b6:48:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/docker-flags-564000/disk.qcow2
	I0912 14:57:44.106276    3178 main.go:141] libmachine: STDOUT: 
	I0912 14:57:44.106289    3178 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:57:44.106302    3178 client.go:171] LocalClient.Create took 223.48725ms
	I0912 14:57:46.108434    3178 start.go:128] duration metric: createHost completed in 2.283054541s
	I0912 14:57:46.108503    3178 start.go:83] releasing machines lock for "docker-flags-564000", held for 2.283528833s
	W0912 14:57:46.108969    3178 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-564000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-564000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:57:46.117775    3178 out.go:177] 
	W0912 14:57:46.121906    3178 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 14:57:46.121930    3178 out.go:239] * 
	* 
	W0912 14:57:46.124730    3178 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:57:46.131686    3178 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-564000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-564000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-564000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (75.540083ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-564000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-564000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-564000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-564000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-564000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-564000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (45.824833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-564000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-564000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-564000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-564000\"\n"
panic.go:523: *** TestDockerFlags FAILED at 2023-09-12 14:57:46.269416 -0700 PDT m=+889.630019167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-564000 -n docker-flags-564000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-564000 -n docker-flags-564000: exit status 7 (28.533667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-564000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-564000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-564000
--- FAIL: TestDockerFlags (9.96s)
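Every VM creation in this run fails on the same host-side error: the qemu2 driver cannot reach the socket_vmnet unix socket at /var/run/socket_vmnet ("Connection refused"). A minimal pre-flight sketch for reproducing the check by hand is below; the check_socket helper is hypothetical, and the socket path is the one taken from the log above:

```shell
#!/bin/sh
# Hypothetical pre-flight helper: report whether a unix socket exists
# at the given path before the qemu2 driver tries to connect to it.
check_socket() {
  sock="$1"
  if [ -S "$sock" ]; then
    echo "ok: $sock exists"
  else
    echo "missing: $sock (is the socket_vmnet service running?)"
  fi
}

# Path minikube's qemu2 driver used in the log above:
check_socket /var/run/socket_vmnet
```

On a healthy host the socket_vmnet service creates /var/run/socket_vmnet before any guest starts; a "Connection refused" from socket_vmnet_client usually means that service is not running on the CI agent or the socket path is stale.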

TestForceSystemdFlag (10.87s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-646000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
E0912 14:57:32.553009    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-646000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.668195667s)

-- stdout --
	* [force-systemd-flag-646000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-646000 in cluster force-systemd-flag-646000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-646000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 14:57:30.475895    3156 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:57:30.476012    3156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:57:30.476015    3156 out.go:309] Setting ErrFile to fd 2...
	I0912 14:57:30.476017    3156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:57:30.476145    3156 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:57:30.477127    3156 out.go:303] Setting JSON to false
	I0912 14:57:30.492192    3156 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1624,"bootTime":1694554226,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:57:30.492282    3156 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:57:30.502094    3156 out.go:177] * [force-systemd-flag-646000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:57:30.509056    3156 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 14:57:30.505077    3156 notify.go:220] Checking for updates...
	I0912 14:57:30.515040    3156 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:57:30.518080    3156 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:57:30.521072    3156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:57:30.523988    3156 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 14:57:30.527061    3156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:57:30.530368    3156 config.go:182] Loaded profile config "force-systemd-env-137000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:57:30.530439    3156 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:57:30.530476    3156 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:57:30.534006    3156 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 14:57:30.541101    3156 start.go:298] selected driver: qemu2
	I0912 14:57:30.541105    3156 start.go:902] validating driver "qemu2" against <nil>
	I0912 14:57:30.541110    3156 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:57:30.543054    3156 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 14:57:30.546042    3156 out.go:177] * Automatically selected the socket_vmnet network
	I0912 14:57:30.549096    3156 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 14:57:30.549114    3156 cni.go:84] Creating CNI manager for ""
	I0912 14:57:30.549120    3156 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:57:30.549124    3156 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 14:57:30.549130    3156 start_flags.go:321] config:
	{Name:force-systemd-flag-646000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-646000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:57:30.553268    3156 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:57:30.556114    3156 out.go:177] * Starting control plane node force-systemd-flag-646000 in cluster force-systemd-flag-646000
	I0912 14:57:30.564042    3156 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 14:57:30.564060    3156 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 14:57:30.564070    3156 cache.go:57] Caching tarball of preloaded images
	I0912 14:57:30.564137    3156 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 14:57:30.564148    3156 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 14:57:30.564210    3156 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/force-systemd-flag-646000/config.json ...
	I0912 14:57:30.564227    3156 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/force-systemd-flag-646000/config.json: {Name:mke2b321e9db1836996f73c8efd6b11e6c25cc50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:57:30.564443    3156 start.go:365] acquiring machines lock for force-systemd-flag-646000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:57:30.564476    3156 start.go:369] acquired machines lock for "force-systemd-flag-646000" in 26.25µs
	I0912 14:57:30.564489    3156 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-646000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:57:30.564529    3156 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:57:30.573062    3156 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 14:57:30.590075    3156 start.go:159] libmachine.API.Create for "force-systemd-flag-646000" (driver="qemu2")
	I0912 14:57:30.590102    3156 client.go:168] LocalClient.Create starting
	I0912 14:57:30.590170    3156 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:57:30.590198    3156 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:30.590211    3156 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:30.590254    3156 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:57:30.590275    3156 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:30.590282    3156 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:30.590628    3156 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:57:30.707060    3156 main.go:141] libmachine: Creating SSH key...
	I0912 14:57:30.876667    3156 main.go:141] libmachine: Creating Disk image...
	I0912 14:57:30.876676    3156 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:57:30.876825    3156 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/disk.qcow2
	I0912 14:57:30.885494    3156 main.go:141] libmachine: STDOUT: 
	I0912 14:57:30.885511    3156 main.go:141] libmachine: STDERR: 
	I0912 14:57:30.885562    3156 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/disk.qcow2 +20000M
	I0912 14:57:30.892689    3156 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:57:30.892701    3156 main.go:141] libmachine: STDERR: 
	I0912 14:57:30.892713    3156 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/disk.qcow2
	I0912 14:57:30.892726    3156 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:57:30.892764    3156 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:a7:35:60:e1:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/disk.qcow2
	I0912 14:57:30.894243    3156 main.go:141] libmachine: STDOUT: 
	I0912 14:57:30.894257    3156 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:57:30.894274    3156 client.go:171] LocalClient.Create took 304.173125ms
	I0912 14:57:32.896411    3156 start.go:128] duration metric: createHost completed in 2.331905333s
	I0912 14:57:32.896466    3156 start.go:83] releasing machines lock for "force-systemd-flag-646000", held for 2.332026167s
	W0912 14:57:32.896530    3156 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:57:32.903854    3156 out.go:177] * Deleting "force-systemd-flag-646000" in qemu2 ...
	W0912 14:57:32.928053    3156 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:57:32.928078    3156 start.go:703] Will try again in 5 seconds ...
	I0912 14:57:37.930250    3156 start.go:365] acquiring machines lock for force-systemd-flag-646000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:57:38.787940    3156 start.go:369] acquired machines lock for "force-systemd-flag-646000" in 857.552166ms
	I0912 14:57:38.788128    3156 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-646000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:57:38.788399    3156 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:57:38.798917    3156 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 14:57:38.847901    3156 start.go:159] libmachine.API.Create for "force-systemd-flag-646000" (driver="qemu2")
	I0912 14:57:38.847942    3156 client.go:168] LocalClient.Create starting
	I0912 14:57:38.848112    3156 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:57:38.848186    3156 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:38.848213    3156 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:38.848289    3156 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:57:38.848331    3156 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:38.848348    3156 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:38.848949    3156 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:57:38.971404    3156 main.go:141] libmachine: Creating SSH key...
	I0912 14:57:39.058853    3156 main.go:141] libmachine: Creating Disk image...
	I0912 14:57:39.058858    3156 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:57:39.059003    3156 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/disk.qcow2
	I0912 14:57:39.067617    3156 main.go:141] libmachine: STDOUT: 
	I0912 14:57:39.067633    3156 main.go:141] libmachine: STDERR: 
	I0912 14:57:39.067685    3156 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/disk.qcow2 +20000M
	I0912 14:57:39.074753    3156 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:57:39.074767    3156 main.go:141] libmachine: STDERR: 
	I0912 14:57:39.074778    3156 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/disk.qcow2
	I0912 14:57:39.074792    3156 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:57:39.074835    3156 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:9e:c5:36:92:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-flag-646000/disk.qcow2
	I0912 14:57:39.076384    3156 main.go:141] libmachine: STDOUT: 
	I0912 14:57:39.076410    3156 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:57:39.076424    3156 client.go:171] LocalClient.Create took 228.480792ms
	I0912 14:57:41.078672    3156 start.go:128] duration metric: createHost completed in 2.290235625s
	I0912 14:57:41.078741    3156 start.go:83] releasing machines lock for "force-systemd-flag-646000", held for 2.290812166s
	W0912 14:57:41.079085    3156 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-646000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-646000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:57:41.087721    3156 out.go:177] 
	W0912 14:57:41.091705    3156 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 14:57:41.091754    3156 out.go:239] * 
	* 
	W0912 14:57:41.094514    3156 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:57:41.103622    3156 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-646000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-646000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-646000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (76.041583ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-646000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-646000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-09-12 14:57:41.1963 -0700 PDT m=+884.556802251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-646000 -n force-systemd-flag-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-646000 -n force-systemd-flag-646000: exit status 7 (35.045916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-646000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-646000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-646000
--- FAIL: TestForceSystemdFlag (10.87s)
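Every VM-creation attempt in this test (and the next) fails on the same STDERR line, `Failed to connect to "/var/run/socket_vmnet": Connection refused`: no socket_vmnet daemon was accepting connections on the CI host when `socket_vmnet_client` tried to attach. A minimal diagnostic sketch for such a host is below; the socket path is taken from the log, while the Homebrew restart hint is an assumption (this report does not say how socket_vmnet was installed).

```shell
#!/bin/sh
# check_socket: report whether the given path exists as a unix socket.
# Diagnostic sketch only -- not part of the minikube test suite.
check_socket() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    echo "listening socket present: $sock"
  else
    echo "socket missing or not a socket: $sock"
  fi
}

check_socket /var/run/socket_vmnet

# Is the daemon process itself running?
pgrep -fl socket_vmnet >/dev/null 2>&1 || echo "no socket_vmnet process found"

# If socket_vmnet was installed via Homebrew (an assumption), restarting
# the service usually clears this:
#   sudo brew services restart socket_vmnet
```

On this host the expected finding would presumably be the "socket missing" branch, matching the repeated `Connection refused` across this report's qemu2 tests.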

TestForceSystemdEnv (9.97s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-137000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-137000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.759408s)

-- stdout --
	* [force-systemd-env-137000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-137000 in cluster force-systemd-env-137000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-137000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0912 14:57:26.501181    3123 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:57:26.501295    3123 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:57:26.501298    3123 out.go:309] Setting ErrFile to fd 2...
	I0912 14:57:26.501301    3123 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:57:26.501416    3123 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:57:26.502492    3123 out.go:303] Setting JSON to false
	I0912 14:57:26.517771    3123 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1620,"bootTime":1694554226,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:57:26.517842    3123 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:57:26.523400    3123 out.go:177] * [force-systemd-env-137000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:57:26.534340    3123 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 14:57:26.530439    3123 notify.go:220] Checking for updates...
	I0912 14:57:26.542297    3123 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:57:26.549278    3123 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:57:26.557344    3123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:57:26.565293    3123 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 14:57:26.573311    3123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0912 14:57:26.577799    3123 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:57:26.577842    3123 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:57:26.582282    3123 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 14:57:26.589305    3123 start.go:298] selected driver: qemu2
	I0912 14:57:26.589309    3123 start.go:902] validating driver "qemu2" against <nil>
	I0912 14:57:26.589314    3123 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:57:26.591370    3123 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 14:57:26.595316    3123 out.go:177] * Automatically selected the socket_vmnet network
	I0912 14:57:26.598483    3123 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 14:57:26.598504    3123 cni.go:84] Creating CNI manager for ""
	I0912 14:57:26.598522    3123 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:57:26.598527    3123 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 14:57:26.598539    3123 start_flags.go:321] config:
	{Name:force-systemd-env-137000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-137000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:57:26.602826    3123 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:57:26.611321    3123 out.go:177] * Starting control plane node force-systemd-env-137000 in cluster force-systemd-env-137000
	I0912 14:57:26.614320    3123 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 14:57:26.614337    3123 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 14:57:26.614350    3123 cache.go:57] Caching tarball of preloaded images
	I0912 14:57:26.614409    3123 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 14:57:26.614414    3123 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 14:57:26.614473    3123 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/force-systemd-env-137000/config.json ...
	I0912 14:57:26.614487    3123 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/force-systemd-env-137000/config.json: {Name:mk4572f14d8e1965e6df196a16154a6b77c75077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:57:26.614723    3123 start.go:365] acquiring machines lock for force-systemd-env-137000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:57:26.614757    3123 start.go:369] acquired machines lock for "force-systemd-env-137000" in 23.792µs
	I0912 14:57:26.614768    3123 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-137000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-137000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:57:26.614802    3123 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:57:26.623293    3123 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 14:57:26.639082    3123 start.go:159] libmachine.API.Create for "force-systemd-env-137000" (driver="qemu2")
	I0912 14:57:26.639110    3123 client.go:168] LocalClient.Create starting
	I0912 14:57:26.639188    3123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:57:26.639217    3123 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:26.639229    3123 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:26.639274    3123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:57:26.639294    3123 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:26.639303    3123 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:26.639617    3123 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:57:26.791430    3123 main.go:141] libmachine: Creating SSH key...
	I0912 14:57:26.870841    3123 main.go:141] libmachine: Creating Disk image...
	I0912 14:57:26.870851    3123 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:57:26.871024    3123 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/disk.qcow2
	I0912 14:57:26.879867    3123 main.go:141] libmachine: STDOUT: 
	I0912 14:57:26.879884    3123 main.go:141] libmachine: STDERR: 
	I0912 14:57:26.879953    3123 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/disk.qcow2 +20000M
	I0912 14:57:26.887558    3123 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:57:26.887572    3123 main.go:141] libmachine: STDERR: 
	I0912 14:57:26.887590    3123 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/disk.qcow2
	I0912 14:57:26.887596    3123 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:57:26.887642    3123 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:b1:55:c4:d5:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/disk.qcow2
	I0912 14:57:26.889268    3123 main.go:141] libmachine: STDOUT: 
	I0912 14:57:26.889282    3123 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:57:26.889302    3123 client.go:171] LocalClient.Create took 250.191209ms
	I0912 14:57:28.891472    3123 start.go:128] duration metric: createHost completed in 2.276682666s
	I0912 14:57:28.891544    3123 start.go:83] releasing machines lock for "force-systemd-env-137000", held for 2.276821416s
	W0912 14:57:28.891607    3123 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:57:28.898798    3123 out.go:177] * Deleting "force-systemd-env-137000" in qemu2 ...
	W0912 14:57:28.923342    3123 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:57:28.923365    3123 start.go:703] Will try again in 5 seconds ...
	I0912 14:57:33.925478    3123 start.go:365] acquiring machines lock for force-systemd-env-137000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:57:33.925871    3123 start.go:369] acquired machines lock for "force-systemd-env-137000" in 266.834µs
	I0912 14:57:33.925984    3123 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-137000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-137000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:57:33.926183    3123 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:57:33.931802    3123 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0912 14:57:33.977921    3123 start.go:159] libmachine.API.Create for "force-systemd-env-137000" (driver="qemu2")
	I0912 14:57:33.978038    3123 client.go:168] LocalClient.Create starting
	I0912 14:57:33.978222    3123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:57:33.978313    3123 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:33.978338    3123 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:33.978440    3123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:57:33.978492    3123 main.go:141] libmachine: Decoding PEM data...
	I0912 14:57:33.978519    3123 main.go:141] libmachine: Parsing certificate...
	I0912 14:57:33.979261    3123 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:57:34.106994    3123 main.go:141] libmachine: Creating SSH key...
	I0912 14:57:34.176149    3123 main.go:141] libmachine: Creating Disk image...
	I0912 14:57:34.176154    3123 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:57:34.176294    3123 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/disk.qcow2
	I0912 14:57:34.184837    3123 main.go:141] libmachine: STDOUT: 
	I0912 14:57:34.184851    3123 main.go:141] libmachine: STDERR: 
	I0912 14:57:34.184902    3123 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/disk.qcow2 +20000M
	I0912 14:57:34.192084    3123 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:57:34.192095    3123 main.go:141] libmachine: STDERR: 
	I0912 14:57:34.192110    3123 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/disk.qcow2
	I0912 14:57:34.192116    3123 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:57:34.192159    3123 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:66:85:96:5c:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/force-systemd-env-137000/disk.qcow2
	I0912 14:57:34.193693    3123 main.go:141] libmachine: STDOUT: 
	I0912 14:57:34.193705    3123 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:57:34.193718    3123 client.go:171] LocalClient.Create took 215.678041ms
	I0912 14:57:36.195900    3123 start.go:128] duration metric: createHost completed in 2.269733833s
	I0912 14:57:36.195994    3123 start.go:83] releasing machines lock for "force-systemd-env-137000", held for 2.27011325s
	W0912 14:57:36.196352    3123 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-137000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-137000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:57:36.203985    3123 out.go:177] 
	W0912 14:57:36.208058    3123 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 14:57:36.208112    3123 out.go:239] * 
	* 
	W0912 14:57:36.210865    3123 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:57:36.218938    3123 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-137000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-137000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-137000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (77.85125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-137000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-137000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-09-12 14:57:36.312729 -0700 PDT m=+879.673133334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-137000 -n force-systemd-env-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-137000 -n force-systemd-env-137000: exit status 7 (34.405292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-137000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-137000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-137000
--- FAIL: TestForceSystemdEnv (9.97s)
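
Both createHost attempts above die at the same step: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket file may exist but no socket_vmnet daemon is accepting connections on it. The condition minikube trips over can be probed directly; the helper below is an illustrative sketch (the function name and usage are ours, not part of minikube or the test suite):

```python
import os
import socket


def unix_socket_alive(path: str) -> bool:
    """Return True only if a daemon is accepting connections on the UNIX
    socket at `path` -- the same check that fails with ECONNREFUSED above.
    """
    if not os.path.exists(path):
        return False
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return True
    except OSError:
        # ConnectionRefusedError: the socket file is stale, nothing listening
        return False
    finally:
        s.close()
```

A stale socket file (daemon crashed or was never started, e.g. after a missed `sudo brew services start socket_vmnet` on this kind of Jenkins host) produces exactly the refused-connection signature seen in every qemu2 test in this report.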

TestFunctional/parallel/ServiceCmdConnect (35.74s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-737000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-737000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-hvztj" [72d66552-3d17-4a78-8b4b-2477c7f0d129] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-hvztj" [72d66552-3d17-4a78-8b4b-2477c7f0d129] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.008980291s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:32422
functional_test.go:1660: error fetching http://192.168.105.4:32422: Get "http://192.168.105.4:32422": dial tcp 192.168.105.4:32422: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32422: Get "http://192.168.105.4:32422": dial tcp 192.168.105.4:32422: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32422: Get "http://192.168.105.4:32422": dial tcp 192.168.105.4:32422: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32422: Get "http://192.168.105.4:32422": dial tcp 192.168.105.4:32422: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32422: Get "http://192.168.105.4:32422": dial tcp 192.168.105.4:32422: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32422: Get "http://192.168.105.4:32422": dial tcp 192.168.105.4:32422: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32422: Get "http://192.168.105.4:32422": dial tcp 192.168.105.4:32422: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32422: Get "http://192.168.105.4:32422": dial tcp 192.168.105.4:32422: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:32422: Get "http://192.168.105.4:32422": dial tcp 192.168.105.4:32422: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-737000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-hvztj
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-737000/192.168.105.4
Start Time:       Tue, 12 Sep 2023 14:48:16 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
  echoserver-arm:
    Container ID:   docker://cc4adc1fd74d656ada5e204e35b8d2ab7ae49b9811b1bed4f7e4b4d63b317fd4
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 12 Sep 2023 14:48:32 -0700
      Finished:     Tue, 12 Sep 2023 14:48:32 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xjmm4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-xjmm4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  34s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-hvztj to functional-737000
  Normal   Pulled     19s (x3 over 34s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    19s (x3 over 34s)  kubelet            Created container echoserver-arm
  Normal   Started    19s (x3 over 34s)  kubelet            Started container echoserver-arm
  Warning  BackOff    6s (x3 over 32s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-hvztj_default(72d66552-3d17-4a78-8b4b-2477c7f0d129)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-737000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
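
`exec format error` in the pod log above is the classic symptom of a CPU-architecture mismatch: the container's entrypoint binary was built for a different architecture than the arm64 node, so the kernel refuses to exec it and the pod crash-loops. One way to spot such a mismatch, sketched here with a hypothetical helper (not part of the test suite), is to read the `e_machine` field from the binary's ELF header:

```python
import struct

# Machine constants from the ELF specification
EM_X86_64 = 0x3E
EM_AARCH64 = 0xB7


def elf_machine(path: str) -> int:
    """Return the e_machine field of an ELF binary.

    Assumes little-endian headers, which covers x86-64 and the usual
    AArch64 builds; a value of EM_X86_64 on an arm64 node explains an
    'exec format error'.
    """
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        raise ValueError(f"{path} is not an ELF binary")
    # e_machine is a 16-bit field at byte offset 18
    return struct.unpack_from("<H", header, 18)[0]
```

Run against `/usr/sbin/nginx` inside the image, a reader like this would distinguish a genuinely multi-arch image from one (like this `echoserver-arm:1.8` tag appears to be) that ships a binary for the wrong architecture.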
functional_test.go:1613: (dbg) Run:  kubectl --context functional-737000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.6.59
IPs:                      10.108.6.59
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32422/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-737000 -n functional-737000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh sudo                                                                                           | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-737000                                                                                                 | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2129355538/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh -- ls                                                                                          | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh sudo                                                                                           | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-737000                                                                                                 | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2149572816/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-737000                                                                                                 | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2149572816/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-737000                                                                                                 | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2149572816/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-737000                                                                                                 | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-737000                                                                                                 | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-737000 --dry-run                                                                                       | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-737000                                                                                                 | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|           | -p functional-737000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 14:48:50
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 14:48:50.747697    2175 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:48:50.747801    2175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:48:50.747804    2175 out.go:309] Setting ErrFile to fd 2...
	I0912 14:48:50.747806    2175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:48:50.747923    2175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:48:50.749318    2175 out.go:303] Setting JSON to false
	I0912 14:48:50.765366    2175 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1104,"bootTime":1694554226,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:48:50.765485    2175 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:48:50.769267    2175 out.go:177] * [functional-737000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:48:50.774208    2175 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 14:48:50.778201    2175 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:48:50.774243    2175 notify.go:220] Checking for updates...
	I0912 14:48:50.779293    2175 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:48:50.782173    2175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:48:50.785223    2175 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 14:48:50.788213    2175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:48:50.791514    2175 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:48:50.791801    2175 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:48:50.796227    2175 out.go:177] * Using the qemu2 driver based on the existing profile
	I0912 14:48:50.805225    2175 start.go:298] selected driver: qemu2
	I0912 14:48:50.805234    2175 start.go:902] validating driver "qemu2" against &{Name:functional-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-737000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:48:50.805296    2175 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:48:50.812215    2175 out.go:177] 
	W0912 14:48:50.816187    2175 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I0912 14:48:50.819194    2175 out.go:177] 
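	The dry-run above fails with RSRC_INSUFFICIENT_REQ_MEMORY because `--memory 250MB` is below minikube's usable minimum of 1800 MB (the log's own figure). A minimal sketch of that validation — the `check_memory` function name is illustrative, not minikube's actual code:

	```shell
	# Sketch of the check behind RSRC_INSUFFICIENT_REQ_MEMORY (hypothetical helper):
	# minikube rejects a --memory request below its usable minimum, 1800 MB per the log.
	check_memory() {
	  requested_mb=$1
	  minimum_mb=1800
	  if [ "$requested_mb" -lt "$minimum_mb" ]; then
	    echo "RSRC_INSUFFICIENT_REQ_MEMORY: ${requested_mb}MB < ${minimum_mb}MB minimum"
	    return 1
	  fi
	  echo "memory request ${requested_mb}MB accepted"
	}

	check_memory 250   # rejected, as in the log above
	check_memory 4000  # accepted; matches the profile's configured Memory:4000
	```

	Rerunning the same `minikube start -p functional-737000 --dry-run` with a memory value at or above 1800 MB would pass this check.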
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-09-12 21:45:22 UTC, ends at Tue 2023-09-12 21:48:51 UTC. --
	Sep 12 21:48:32 functional-737000 dockerd[7203]: time="2023-09-12T21:48:32.753384513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:48:32 functional-737000 dockerd[7203]: time="2023-09-12T21:48:32.753394888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 21:48:32 functional-737000 dockerd[7203]: time="2023-09-12T21:48:32.753630554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:48:32 functional-737000 dockerd[7203]: time="2023-09-12T21:48:32.790377165Z" level=info msg="shim disconnected" id=cc4adc1fd74d656ada5e204e35b8d2ab7ae49b9811b1bed4f7e4b4d63b317fd4 namespace=moby
	Sep 12 21:48:32 functional-737000 dockerd[7197]: time="2023-09-12T21:48:32.790427957Z" level=info msg="ignoring event" container=cc4adc1fd74d656ada5e204e35b8d2ab7ae49b9811b1bed4f7e4b4d63b317fd4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:48:32 functional-737000 dockerd[7203]: time="2023-09-12T21:48:32.790809498Z" level=warning msg="cleaning up after shim disconnected" id=cc4adc1fd74d656ada5e204e35b8d2ab7ae49b9811b1bed4f7e4b4d63b317fd4 namespace=moby
	Sep 12 21:48:32 functional-737000 dockerd[7203]: time="2023-09-12T21:48:32.790819289Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:48:43 functional-737000 dockerd[7203]: time="2023-09-12T21:48:43.751305933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 21:48:43 functional-737000 dockerd[7203]: time="2023-09-12T21:48:43.751339017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:48:43 functional-737000 dockerd[7203]: time="2023-09-12T21:48:43.751356183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 21:48:43 functional-737000 dockerd[7203]: time="2023-09-12T21:48:43.751362475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:48:43 functional-737000 dockerd[7197]: time="2023-09-12T21:48:43.793302690Z" level=info msg="ignoring event" container=7534838b79542902883e217a4be3ba6abe8215a49b185c57251f4cca99cb6091 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:48:43 functional-737000 dockerd[7203]: time="2023-09-12T21:48:43.793386982Z" level=info msg="shim disconnected" id=7534838b79542902883e217a4be3ba6abe8215a49b185c57251f4cca99cb6091 namespace=moby
	Sep 12 21:48:43 functional-737000 dockerd[7203]: time="2023-09-12T21:48:43.793411148Z" level=warning msg="cleaning up after shim disconnected" id=7534838b79542902883e217a4be3ba6abe8215a49b185c57251f4cca99cb6091 namespace=moby
	Sep 12 21:48:43 functional-737000 dockerd[7203]: time="2023-09-12T21:48:43.793415190Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:48:51 functional-737000 dockerd[7203]: time="2023-09-12T21:48:51.697174260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 21:48:51 functional-737000 dockerd[7203]: time="2023-09-12T21:48:51.697205802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:48:51 functional-737000 dockerd[7203]: time="2023-09-12T21:48:51.697218718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 21:48:51 functional-737000 dockerd[7203]: time="2023-09-12T21:48:51.697225093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:48:51 functional-737000 dockerd[7203]: time="2023-09-12T21:48:51.699865598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 21:48:51 functional-737000 dockerd[7203]: time="2023-09-12T21:48:51.699896639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:48:51 functional-737000 dockerd[7203]: time="2023-09-12T21:48:51.699924264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 21:48:51 functional-737000 dockerd[7203]: time="2023-09-12T21:48:51.699929223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:48:51 functional-737000 cri-dockerd[7459]: time="2023-09-12T21:48:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ed332ba3e28336520f0725e2ec1f46af01802cfefcc56d81bfab623847850242/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 12 21:48:51 functional-737000 cri-dockerd[7459]: time="2023-09-12T21:48:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/aa9677e2e8ed7ac785574d799c1009929a81b71892a44b14389ee0434a52a594/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                           CREATED              STATE               NAME                      ATTEMPT             POD ID
	7534838b79542       72565bf5bbedf                                                                   8 seconds ago        Exited              echoserver-arm            3                   e2e47f7c9f45a
	cc4adc1fd74d6       72565bf5bbedf                                                                   19 seconds ago       Exited              echoserver-arm            2                   e0d8099338f70
	7f5de0d198443       nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153   23 seconds ago       Running             myfrontend                0                   f530fb695d000
	ad4af929848c1       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70   41 seconds ago       Running             nginx                     0                   286104c91e767
	8ab29eff4f8aa       812f5241df7fd                                                                   About a minute ago   Running             kube-proxy                2                   1cf97af81c104
	1afd0165606e0       97e04611ad434                                                                   About a minute ago   Running             coredns                   2                   3eadf689a77a5
	12329cc80dca8       ba04bb24b9575                                                                   About a minute ago   Running             storage-provisioner       2                   109946c01ff7b
	afcfa639becec       b4a5a57e99492                                                                   About a minute ago   Running             kube-scheduler            2                   7d8712765e641
	d326e6aa5e690       9cdd6470f48c8                                                                   About a minute ago   Running             etcd                      2                   59988f41fe5c8
	3519836c8009d       b29fb62480892                                                                   About a minute ago   Running             kube-apiserver            0                   11e71e528a092
	37ca0240de447       8b6e1980b7584                                                                   About a minute ago   Running             kube-controller-manager   2                   2fa8c6d8ac69a
	15c8e2eeedbc4       ba04bb24b9575                                                                   2 minutes ago        Exited              storage-provisioner       1                   c4d251d8041e2
	0f9cf23de5bac       812f5241df7fd                                                                   2 minutes ago        Exited              kube-proxy                1                   4b29e6fe113ed
	9eea01fb25229       97e04611ad434                                                                   2 minutes ago        Exited              coredns                   1                   9b371282aa34d
	d3e4def4d5fff       8b6e1980b7584                                                                   2 minutes ago        Exited              kube-controller-manager   1                   01c1788194178
	ea7ae0760c74f       b4a5a57e99492                                                                   2 minutes ago        Exited              kube-scheduler            1                   a07d329f7a639
	796837eaea519       9cdd6470f48c8                                                                   2 minutes ago        Exited              etcd                      1                   b2c65e5a82b53
	
	* 
	* ==> coredns [1afd0165606e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49167 - 58060 "HINFO IN 7818124071425246846.7399759390856449322. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004185328s
	[INFO] 10.244.0.1:38755 - 60309 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000090416s
	[INFO] 10.244.0.1:57450 - 23565 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000098874s
	[INFO] 10.244.0.1:50131 - 34434 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.0009197s
	[INFO] 10.244.0.1:20525 - 41806 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000062s
	[INFO] 10.244.0.1:16876 - 61453 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000065874s
	[INFO] 10.244.0.1:60279 - 34911 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000132249s
	
	* 
	* ==> coredns [9eea01fb2522] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42612 - 27518 "HINFO IN 2252453695656298071.3285822342135627134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004689589s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-737000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-737000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45f04e6c33f17ea86560d581e35f03eca0c584e1
	                    minikube.k8s.io/name=functional-737000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_12T14_45_39_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 21:45:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-737000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 21:48:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 21:48:35 +0000   Tue, 12 Sep 2023 21:45:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 21:48:35 +0000   Tue, 12 Sep 2023 21:45:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 21:48:35 +0000   Tue, 12 Sep 2023 21:45:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Sep 2023 21:48:35 +0000   Tue, 12 Sep 2023 21:45:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-737000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6fb968f79d44393924f9941cd0b5845
	  System UUID:                f6fb968f79d44393924f9941cd0b5845
	  Boot ID:                    3291ac84-84dc-4b95-9381-ea80d44dd81d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-kg9rn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  default                     hello-node-connect-7799dfb7c6-hvztj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 coredns-5dd5756b68-br5pg                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m
	  kube-system                 etcd-functional-737000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m13s
	  kube-system                 kube-apiserver-functional-737000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-functional-737000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 kube-proxy-h7hmg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-scheduler-functional-737000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-7l5jd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-5k7l7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m58s              kube-proxy       
	  Normal   Starting                 76s                kube-proxy       
	  Normal   Starting                 118s               kube-proxy       
	  Normal   NodeAllocatableEnforced  3m13s              kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m13s              kubelet          Node functional-737000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m13s              kubelet          Node functional-737000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m13s              kubelet          Node functional-737000 status is now: NodeHasSufficientPID
	  Normal   Starting                 3m13s              kubelet          Starting kubelet.
	  Normal   NodeReady                3m10s              kubelet          Node functional-737000 status is now: NodeReady
	  Normal   RegisteredNode           3m                 node-controller  Node functional-737000 event: Registered Node functional-737000 in Controller
	  Warning  ContainerGCFailed        2m13s              kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   NodeNotReady             2m12s              kubelet          Node functional-737000 status is now: NodeNotReady
	  Normal   RegisteredNode           107s               node-controller  Node functional-737000 event: Registered Node functional-737000 in Controller
	  Normal   Starting                 82s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node functional-737000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node functional-737000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     82s (x7 over 82s)  kubelet          Node functional-737000 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           66s                node-controller  Node functional-737000 event: Registered Node functional-737000 in Controller
	
	* 
	* ==> dmesg <==
	* [ +31.750475] systemd-fstab-generator[4296]: Ignoring "noauto" for root device
	[  +0.148883] systemd-fstab-generator[4329]: Ignoring "noauto" for root device
	[  +0.090309] systemd-fstab-generator[4340]: Ignoring "noauto" for root device
	[  +0.100996] systemd-fstab-generator[4353]: Ignoring "noauto" for root device
	[ +11.436360] systemd-fstab-generator[4902]: Ignoring "noauto" for root device
	[  +0.065239] systemd-fstab-generator[4913]: Ignoring "noauto" for root device
	[  +0.066636] systemd-fstab-generator[4924]: Ignoring "noauto" for root device
	[  +0.063102] systemd-fstab-generator[4935]: Ignoring "noauto" for root device
	[  +0.096681] systemd-fstab-generator[5007]: Ignoring "noauto" for root device
	[  +6.178178] kauditd_printk_skb: 34 callbacks suppressed
	[Sep12 21:47] systemd-fstab-generator[6722]: Ignoring "noauto" for root device
	[  +0.116274] systemd-fstab-generator[6770]: Ignoring "noauto" for root device
	[  +0.104941] systemd-fstab-generator[6781]: Ignoring "noauto" for root device
	[  +0.091701] systemd-fstab-generator[6794]: Ignoring "noauto" for root device
	[ +11.434907] systemd-fstab-generator[7348]: Ignoring "noauto" for root device
	[  +0.071054] systemd-fstab-generator[7359]: Ignoring "noauto" for root device
	[  +0.064847] systemd-fstab-generator[7370]: Ignoring "noauto" for root device
	[  +0.084570] systemd-fstab-generator[7381]: Ignoring "noauto" for root device
	[  +0.086397] systemd-fstab-generator[7452]: Ignoring "noauto" for root device
	[  +0.869978] systemd-fstab-generator[7703]: Ignoring "noauto" for root device
	[  +4.666294] kauditd_printk_skb: 29 callbacks suppressed
	[ +21.948399] kauditd_printk_skb: 9 callbacks suppressed
	[  +0.494748] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Sep12 21:48] kauditd_printk_skb: 1 callbacks suppressed
	[ +15.798525] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [796837eaea51] <==
	* {"level":"info","ts":"2023-09-12T21:46:51.832387Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T21:46:53.025583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-12T21:46:53.025648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-12T21:46:53.025702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-09-12T21:46:53.02572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-09-12T21:46:53.025726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-12T21:46:53.025737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-09-12T21:46:53.025745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-12T21:46:53.026915Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-737000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-12T21:46:53.026927Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T21:46:53.027859Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-12T21:46:53.027037Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T21:46:53.027996Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-12T21:46:53.028051Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-12T21:46:53.028616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-12T21:47:18.07432Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-12T21:47:18.074369Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-737000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-09-12T21:47:18.074415Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T21:47:18.074453Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T21:47:18.081812Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T21:47:18.081834Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-12T21:47:18.081854Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-09-12T21:47:18.083632Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-12T21:47:18.083669Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-12T21:47:18.083672Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-737000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [d326e6aa5e69] <==
	* {"level":"info","ts":"2023-09-12T21:47:31.637649Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-12T21:47:31.637653Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-12T21:47:31.637754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-09-12T21:47:31.637779Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-09-12T21:47:31.637816Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T21:47:31.637841Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T21:47:31.640371Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-12T21:47:31.640463Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-12T21:47:31.640467Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-12T21:47:31.640819Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-12T21:47:31.640833Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-12T21:47:33.432961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-12T21:47:33.433106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-12T21:47:33.433147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-12T21:47:33.433187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-09-12T21:47:33.433202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-12T21:47:33.433227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-09-12T21:47:33.433254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-12T21:47:33.438544Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-737000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-12T21:47:33.438755Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T21:47:33.439227Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-12T21:47:33.4394Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-12T21:47:33.439639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T21:47:33.44099Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-12T21:47:33.441686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	* 
	* ==> kernel <==
	*  21:48:52 up 3 min,  0 users,  load average: 0.39, 0.19, 0.07
	Linux functional-737000 5.10.57 #1 SMP PREEMPT Mon Sep 11 23:30:27 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3519836c8009] <==
	* I0912 21:47:34.105894       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0912 21:47:34.105912       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0912 21:47:34.105949       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0912 21:47:34.106269       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0912 21:47:34.106496       1 aggregator.go:166] initial CRD sync complete...
	I0912 21:47:34.106510       1 autoregister_controller.go:141] Starting autoregister controller
	I0912 21:47:34.106516       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0912 21:47:34.106522       1 cache.go:39] Caches are synced for autoregister controller
	E0912 21:47:34.108133       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0912 21:47:35.004877       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0912 21:47:35.630863       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0912 21:47:35.633917       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0912 21:47:35.648288       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0912 21:47:35.655911       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 21:47:35.658164       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0912 21:47:46.744820       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0912 21:47:46.834751       1 controller.go:624] quota admission added evaluator for: endpoints
	I0912 21:47:51.418022       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.62.177"}
	I0912 21:47:56.735481       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0912 21:47:56.779913       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.121.78"}
	I0912 21:48:07.482440       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.63.18"}
	I0912 21:48:16.919086       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.6.59"}
	I0912 21:48:51.268936       1 controller.go:624] quota admission added evaluator for: namespaces
	I0912 21:48:51.349883       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.113.143"}
	I0912 21:48:51.367332       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.194.31"}
	
	* 
	* ==> kube-controller-manager [37ca0240de44] <==
	* I0912 21:48:51.310584       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0912 21:48:51.310923       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="3.856132ms"
	E0912 21:48:51.310972       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0912 21:48:51.310993       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0912 21:48:51.316336       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.109014ms"
	E0912 21:48:51.316376       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0912 21:48:51.320086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="6.34026ms"
	E0912 21:48:51.320116       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0912 21:48:51.320093       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="3.69509ms"
	E0912 21:48:51.320257       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0912 21:48:51.320273       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0912 21:48:51.320298       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0912 21:48:51.324134       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="2.540629ms"
	E0912 21:48:51.324146       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0912 21:48:51.324253       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0912 21:48:51.334688       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-5k7l7"
	I0912 21:48:51.336701       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="4.110424ms"
	I0912 21:48:51.341735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="4.964967ms"
	I0912 21:48:51.341770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.167µs"
	I0912 21:48:51.349450       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-7l5jd"
	I0912 21:48:51.355310       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.292µs"
	I0912 21:48:51.359449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="18.341072ms"
	I0912 21:48:51.367503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="7.920513ms"
	I0912 21:48:51.369681       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="14.708µs"
	I0912 21:48:51.386292       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="28.5µs"
	
	* 
	* ==> kube-controller-manager [d3e4def4d5ff] <==
	* I0912 21:47:05.891981       1 shared_informer.go:318] Caches are synced for persistent volume
	I0912 21:47:05.894606       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0912 21:47:05.895695       1 shared_informer.go:318] Caches are synced for disruption
	I0912 21:47:05.902474       1 shared_informer.go:318] Caches are synced for HPA
	I0912 21:47:05.914606       1 shared_informer.go:318] Caches are synced for job
	I0912 21:47:05.914639       1 shared_informer.go:318] Caches are synced for ephemeral
	I0912 21:47:05.915699       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0912 21:47:05.915773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.918µs"
	I0912 21:47:05.916826       1 shared_informer.go:318] Caches are synced for attach detach
	I0912 21:47:05.917969       1 shared_informer.go:318] Caches are synced for endpoint
	I0912 21:47:05.921195       1 shared_informer.go:318] Caches are synced for taint
	I0912 21:47:05.921246       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0912 21:47:05.921303       1 taint_manager.go:211] "Sending events to api server"
	I0912 21:47:05.921583       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0912 21:47:05.921705       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-737000"
	I0912 21:47:05.921763       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0912 21:47:05.921742       1 event.go:307] "Event occurred" object="functional-737000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-737000 event: Registered Node functional-737000 in Controller"
	I0912 21:47:05.922656       1 shared_informer.go:318] Caches are synced for GC
	I0912 21:47:05.928025       1 shared_informer.go:318] Caches are synced for resource quota
	I0912 21:47:05.937303       1 shared_informer.go:318] Caches are synced for deployment
	I0912 21:47:05.940486       1 shared_informer.go:318] Caches are synced for PVC protection
	I0912 21:47:05.941570       1 shared_informer.go:318] Caches are synced for resource quota
	I0912 21:47:06.256724       1 shared_informer.go:318] Caches are synced for garbage collector
	I0912 21:47:06.336906       1 shared_informer.go:318] Caches are synced for garbage collector
	I0912 21:47:06.336920       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [0f9cf23de5ba] <==
	* I0912 21:46:52.411591       1 server_others.go:69] "Using iptables proxy"
	I0912 21:46:53.631880       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0912 21:46:53.691830       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0912 21:46:53.691843       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 21:46:53.692506       1 server_others.go:152] "Using iptables Proxier"
	I0912 21:46:53.692550       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0912 21:46:53.692634       1 server.go:846] "Version info" version="v1.28.1"
	I0912 21:46:53.692724       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:46:53.693026       1 config.go:188] "Starting service config controller"
	I0912 21:46:53.693058       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0912 21:46:53.693078       1 config.go:97] "Starting endpoint slice config controller"
	I0912 21:46:53.693094       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0912 21:46:53.695931       1 config.go:315] "Starting node config controller"
	I0912 21:46:53.696502       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0912 21:46:53.794200       1 shared_informer.go:318] Caches are synced for service config
	I0912 21:46:53.794199       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0912 21:46:53.796698       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [8ab29eff4f8a] <==
	* I0912 21:47:35.321884       1 server_others.go:69] "Using iptables proxy"
	I0912 21:47:35.326463       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0912 21:47:35.334795       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0912 21:47:35.334809       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 21:47:35.335452       1 server_others.go:152] "Using iptables Proxier"
	I0912 21:47:35.335469       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0912 21:47:35.335536       1 server.go:846] "Version info" version="v1.28.1"
	I0912 21:47:35.335544       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:47:35.336473       1 config.go:188] "Starting service config controller"
	I0912 21:47:35.336485       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0912 21:47:35.336521       1 config.go:97] "Starting endpoint slice config controller"
	I0912 21:47:35.336526       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0912 21:47:35.336896       1 config.go:315] "Starting node config controller"
	I0912 21:47:35.336920       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0912 21:47:35.437565       1 shared_informer.go:318] Caches are synced for node config
	I0912 21:47:35.437565       1 shared_informer.go:318] Caches are synced for service config
	I0912 21:47:35.437602       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [afcfa639bece] <==
	* I0912 21:47:32.109427       1 serving.go:348] Generated self-signed cert in-memory
	W0912 21:47:34.034097       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0912 21:47:34.034112       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 21:47:34.034116       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0912 21:47:34.034119       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0912 21:47:34.062010       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0912 21:47:34.062109       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:47:34.062796       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0912 21:47:34.062822       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 21:47:34.063719       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0912 21:47:34.063766       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0912 21:47:34.163506       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [ea7ae0760c74] <==
	* I0912 21:46:52.613444       1 serving.go:348] Generated self-signed cert in-memory
	W0912 21:46:53.576299       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0912 21:46:53.576316       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 21:46:53.576321       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0912 21:46:53.576324       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0912 21:46:53.623936       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0912 21:46:53.624046       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:46:53.624896       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0912 21:46:53.625010       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0912 21:46:53.625026       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 21:46:53.625039       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0912 21:46:53.725823       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 21:47:18.080347       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0912 21:47:18.080484       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0912 21:47:18.080569       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-12 21:45:22 UTC, ends at Tue 2023-09-12 21:48:52 UTC. --
	Sep 12 21:48:28 functional-737000 kubelet[7709]: I0912 21:48:28.730669    7709 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1efcd0cc-725f-41e2-b9b5-e761ba50022b" path="/var/lib/kubelet/pods/1efcd0cc-725f-41e2-b9b5-e761ba50022b/volumes"
	Sep 12 21:48:30 functional-737000 kubelet[7709]: I0912 21:48:30.727376    7709 scope.go:117] "RemoveContainer" containerID="ca31c82b0a98eb3ab91979907d344867dda777799170e5f9961c8d13d38e53f5"
	Sep 12 21:48:30 functional-737000 kubelet[7709]: E0912 21:48:30.727641    7709 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-kg9rn_default(e7dada1b-a3db-48b5-b371-7f0d30ea3ffe)\"" pod="default/hello-node-759d89bdcc-kg9rn" podUID="e7dada1b-a3db-48b5-b371-7f0d30ea3ffe"
	Sep 12 21:48:30 functional-737000 kubelet[7709]: I0912 21:48:30.732939    7709 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.992929136 podCreationTimestamp="2023-09-12 21:48:27 +0000 UTC" firstStartedPulling="2023-09-12 21:48:27.708434632 +0000 UTC m=+57.058900335" lastFinishedPulling="2023-09-12 21:48:28.448421342 +0000 UTC m=+57.798887044" observedRunningTime="2023-09-12 21:48:29.150489185 +0000 UTC m=+58.500954846" watchObservedRunningTime="2023-09-12 21:48:30.732915845 +0000 UTC m=+60.083381547"
	Sep 12 21:48:30 functional-737000 kubelet[7709]: E0912 21:48:30.742656    7709 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 12 21:48:30 functional-737000 kubelet[7709]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 12 21:48:30 functional-737000 kubelet[7709]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 12 21:48:30 functional-737000 kubelet[7709]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 12 21:48:30 functional-737000 kubelet[7709]: I0912 21:48:30.795860    7709 scope.go:117] "RemoveContainer" containerID="5f086a984116a5b47eaae16b45538f0ef53496a36b9d914edbd93883b721eac6"
	Sep 12 21:48:32 functional-737000 kubelet[7709]: I0912 21:48:32.727834    7709 scope.go:117] "RemoveContainer" containerID="dcf90849806e4017ab55bf2601336b9868031ce1826895acbe2986271137133c"
	Sep 12 21:48:33 functional-737000 kubelet[7709]: I0912 21:48:33.169021    7709 scope.go:117] "RemoveContainer" containerID="cc4adc1fd74d656ada5e204e35b8d2ab7ae49b9811b1bed4f7e4b4d63b317fd4"
	Sep 12 21:48:33 functional-737000 kubelet[7709]: E0912 21:48:33.169128    7709 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-hvztj_default(72d66552-3d17-4a78-8b4b-2477c7f0d129)\"" pod="default/hello-node-connect-7799dfb7c6-hvztj" podUID="72d66552-3d17-4a78-8b4b-2477c7f0d129"
	Sep 12 21:48:33 functional-737000 kubelet[7709]: I0912 21:48:33.169339    7709 scope.go:117] "RemoveContainer" containerID="dcf90849806e4017ab55bf2601336b9868031ce1826895acbe2986271137133c"
	Sep 12 21:48:43 functional-737000 kubelet[7709]: I0912 21:48:43.728522    7709 scope.go:117] "RemoveContainer" containerID="ca31c82b0a98eb3ab91979907d344867dda777799170e5f9961c8d13d38e53f5"
	Sep 12 21:48:44 functional-737000 kubelet[7709]: I0912 21:48:44.229050    7709 scope.go:117] "RemoveContainer" containerID="ca31c82b0a98eb3ab91979907d344867dda777799170e5f9961c8d13d38e53f5"
	Sep 12 21:48:44 functional-737000 kubelet[7709]: I0912 21:48:44.229235    7709 scope.go:117] "RemoveContainer" containerID="7534838b79542902883e217a4be3ba6abe8215a49b185c57251f4cca99cb6091"
	Sep 12 21:48:44 functional-737000 kubelet[7709]: E0912 21:48:44.229322    7709 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-kg9rn_default(e7dada1b-a3db-48b5-b371-7f0d30ea3ffe)\"" pod="default/hello-node-759d89bdcc-kg9rn" podUID="e7dada1b-a3db-48b5-b371-7f0d30ea3ffe"
	Sep 12 21:48:45 functional-737000 kubelet[7709]: I0912 21:48:45.729067    7709 scope.go:117] "RemoveContainer" containerID="cc4adc1fd74d656ada5e204e35b8d2ab7ae49b9811b1bed4f7e4b4d63b317fd4"
	Sep 12 21:48:45 functional-737000 kubelet[7709]: E0912 21:48:45.729381    7709 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-hvztj_default(72d66552-3d17-4a78-8b4b-2477c7f0d129)\"" pod="default/hello-node-connect-7799dfb7c6-hvztj" podUID="72d66552-3d17-4a78-8b4b-2477c7f0d129"
	Sep 12 21:48:51 functional-737000 kubelet[7709]: I0912 21:48:51.340319    7709 topology_manager.go:215] "Topology Admit Handler" podUID="1008d945-7c4c-4976-85cc-2ec930e39243" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-5k7l7"
	Sep 12 21:48:51 functional-737000 kubelet[7709]: I0912 21:48:51.354528    7709 topology_manager.go:215] "Topology Admit Handler" podUID="23c10e97-7204-43fe-a2fe-ab3cd94fc05c" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-7fd5cb4ddc-7l5jd"
	Sep 12 21:48:51 functional-737000 kubelet[7709]: I0912 21:48:51.378322    7709 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/23c10e97-7204-43fe-a2fe-ab3cd94fc05c-tmp-volume\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-7l5jd\" (UID: \"23c10e97-7204-43fe-a2fe-ab3cd94fc05c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-7l5jd"
	Sep 12 21:48:51 functional-737000 kubelet[7709]: I0912 21:48:51.378466    7709 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1008d945-7c4c-4976-85cc-2ec930e39243-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-5k7l7\" (UID: \"1008d945-7c4c-4976-85cc-2ec930e39243\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5k7l7"
	Sep 12 21:48:51 functional-737000 kubelet[7709]: I0912 21:48:51.378483    7709 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hrgd\" (UniqueName: \"kubernetes.io/projected/1008d945-7c4c-4976-85cc-2ec930e39243-kube-api-access-7hrgd\") pod \"kubernetes-dashboard-8694d4445c-5k7l7\" (UID: \"1008d945-7c4c-4976-85cc-2ec930e39243\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5k7l7"
	Sep 12 21:48:51 functional-737000 kubelet[7709]: I0912 21:48:51.378494    7709 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npcp9\" (UniqueName: \"kubernetes.io/projected/23c10e97-7204-43fe-a2fe-ab3cd94fc05c-kube-api-access-npcp9\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-7l5jd\" (UID: \"23c10e97-7204-43fe-a2fe-ab3cd94fc05c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-7l5jd"
	
	* 
	* ==> storage-provisioner [12329cc80dca] <==
	* I0912 21:47:35.300406       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:47:35.309904       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:47:35.310473       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:47:52.698416       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:47:52.698484       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-737000_d3df8c3c-c733-46ba-9e98-49b4c419febd!
	I0912 21:47:52.698841       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"639ce852-10e2-4809-b662-27867e7bfa95", APIVersion:"v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-737000_d3df8c3c-c733-46ba-9e98-49b4c419febd became leader
	I0912 21:47:52.799738       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-737000_d3df8c3c-c733-46ba-9e98-49b4c419febd!
	I0912 21:48:15.447558       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0912 21:48:15.447978       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"0ff63367-6798-4351-ae30-baffa17cf31a", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0912 21:48:15.447630       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    86b0fae4-202a-4734-80e2-0327ee2763e2 358 0 2023-09-12 21:45:54 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-09-12 21:45:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-0ff63367-6798-4351-ae30-baffa17cf31a &PersistentVolumeClaim{ObjectMeta:{myclaim  default  0ff63367-6798-4351-ae30-baffa17cf31a 714 0 2023-09-12 21:48:15 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-09-12 21:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-09-12 21:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0912 21:48:15.448312       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-0ff63367-6798-4351-ae30-baffa17cf31a" provisioned
	I0912 21:48:15.448345       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0912 21:48:15.448362       1 volume_store.go:212] Trying to save persistentvolume "pvc-0ff63367-6798-4351-ae30-baffa17cf31a"
	I0912 21:48:15.453941       1 volume_store.go:219] persistentvolume "pvc-0ff63367-6798-4351-ae30-baffa17cf31a" saved
	I0912 21:48:15.455846       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"0ff63367-6798-4351-ae30-baffa17cf31a", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-0ff63367-6798-4351-ae30-baffa17cf31a
	
	* 
	* ==> storage-provisioner [15c8e2eeedbc] <==
	* I0912 21:46:52.459115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:46:53.632575       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:46:53.632735       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:47:11.026482       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:47:11.026555       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-737000_465bcd04-af5b-416e-b4a7-91e5fb622b32!
	I0912 21:47:11.027026       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"639ce852-10e2-4809-b662-27867e7bfa95", APIVersion:"v1", ResourceVersion:"514", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-737000_465bcd04-af5b-416e-b4a7-91e5fb622b32 became leader
	I0912 21:47:11.127123       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-737000_465bcd04-af5b-416e-b4a7-91e5fb622b32!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-737000 -n functional-737000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-737000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: dashboard-metrics-scraper-7fd5cb4ddc-7l5jd kubernetes-dashboard-8694d4445c-5k7l7
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-737000 describe pod dashboard-metrics-scraper-7fd5cb4ddc-7l5jd kubernetes-dashboard-8694d4445c-5k7l7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-737000 describe pod dashboard-metrics-scraper-7fd5cb4ddc-7l5jd kubernetes-dashboard-8694d4445c-5k7l7: exit status 1 (36.096084ms)

** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-7fd5cb4ddc-7l5jd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-5k7l7" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-737000 describe pod dashboard-metrics-scraper-7fd5cb4ddc-7l5jd kubernetes-dashboard-8694d4445c-5k7l7: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (35.74s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 80. stderr: I0912 14:48:07.200795    2024 out.go:296] Setting OutFile to fd 1 ...
I0912 14:48:07.201075    2024 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 14:48:07.201079    2024 out.go:309] Setting ErrFile to fd 2...
I0912 14:48:07.201081    2024 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 14:48:07.201211    2024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
I0912 14:48:07.201436    2024 mustload.go:65] Loading cluster: functional-737000
I0912 14:48:07.201648    2024 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 14:48:07.206155    2024 out.go:177] 
W0912 14:48:07.209232    2024 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/functional-737000/monitor: connect: connection refused
X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/functional-737000/monitor: connect: connection refused
W0912 14:48:07.209238    2024 out.go:239] * 
* 
W0912 14:48:07.210660    2024 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0912 14:48:07.213150    2024 out.go:177] 

stdout: 

functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2023: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.17s)

TestImageBuild/serial/BuildWithBuildArg (1.06s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-477000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-477000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in aec94834e107
	Removing intermediate container aec94834e107
	 ---> 35e921e855c9
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 8fc2a20327f2
	Removing intermediate container 8fc2a20327f2
	 ---> c81457e2b3e7
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 927c5d5ddee0
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-477000 -n image-477000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-477000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                                        Args                                                        |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-737000                                                                                               | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|                | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2149572816/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-737000                                                                                               | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|                | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2149572816/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh            | functional-737000 ssh findmnt                                                                                      | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|                | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-737000 ssh findmnt                                                                                      | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-737000 ssh findmnt                                                                                      | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-737000 ssh findmnt                                                                                      | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| mount          | -p functional-737000                                                                                               | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|                | --kill=true                                                                                                        |                   |         |         |                     |                     |
	| start          | -p functional-737000                                                                                               | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|                | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start          | -p functional-737000 --dry-run                                                                                     | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start          | -p functional-737000                                                                                               | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|                | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                                                                 | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | -p functional-737000                                                                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-737000                                                                                                  | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-737000                                                                                                  | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-737000                                                                                                  | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| image          | functional-737000                                                                                                  | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | image ls --format short                                                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| image          | functional-737000                                                                                                  | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | image ls --format yaml                                                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| ssh            | functional-737000 ssh pgrep                                                                                        | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|                | buildkitd                                                                                                          |                   |         |         |                     |                     |
	| image          | functional-737000 image build -t                                                                                   | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | localhost/my-image:functional-737000                                                                               |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                                                                   |                   |         |         |                     |                     |
	| image          | functional-737000 image ls                                                                                         | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	| image          | functional-737000                                                                                                  | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | image ls --format json                                                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| image          | functional-737000                                                                                                  | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | image ls --format table                                                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| delete         | -p functional-737000                                                                                               | functional-737000 | jenkins | v1.31.2 | 12 Sep 23 14:49 PDT | 12 Sep 23 14:49 PDT |
	| start          | -p image-477000 --driver=qemu2                                                                                     | image-477000      | jenkins | v1.31.2 | 12 Sep 23 14:49 PDT | 12 Sep 23 14:49 PDT |
	|                |                                                                                                                    |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                                                                                | image-477000      | jenkins | v1.31.2 | 12 Sep 23 14:49 PDT | 12 Sep 23 14:49 PDT |
	|                | ./testdata/image-build/test-normal                                                                                 |                   |         |         |                     |                     |
	|                | -p image-477000                                                                                                    |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                                                                                | image-477000      | jenkins | v1.31.2 | 12 Sep 23 14:49 PDT | 12 Sep 23 14:49 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                                                                           |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                                                                               |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                                                                                 |                   |         |         |                     |                     |
	|                | image-477000                                                                                                       |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 14:49:00
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 14:49:00.139322    2230 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:49:00.139432    2230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:49:00.139433    2230 out.go:309] Setting ErrFile to fd 2...
	I0912 14:49:00.139436    2230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:49:00.139591    2230 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:49:00.140622    2230 out.go:303] Setting JSON to false
	I0912 14:49:00.156775    2230 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1114,"bootTime":1694554226,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:49:00.156835    2230 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:49:00.160548    2230 out.go:177] * [image-477000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:49:00.166524    2230 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 14:49:00.166548    2230 notify.go:220] Checking for updates...
	I0912 14:49:00.173526    2230 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:49:00.176452    2230 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:49:00.179550    2230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:49:00.182568    2230 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 14:49:00.185513    2230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:49:00.188714    2230 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:49:00.192452    2230 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 14:49:00.199503    2230 start.go:298] selected driver: qemu2
	I0912 14:49:00.199505    2230 start.go:902] validating driver "qemu2" against <nil>
	I0912 14:49:00.199511    2230 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:49:00.199579    2230 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 14:49:00.202524    2230 out.go:177] * Automatically selected the socket_vmnet network
	I0912 14:49:00.205785    2230 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0912 14:49:00.205883    2230 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 14:49:00.205897    2230 cni.go:84] Creating CNI manager for ""
	I0912 14:49:00.205905    2230 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:49:00.205909    2230 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 14:49:00.205914    2230 start_flags.go:321] config:
	{Name:image-477000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:49:00.210336    2230 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:49:00.217560    2230 out.go:177] * Starting control plane node image-477000 in cluster image-477000
	I0912 14:49:00.221445    2230 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 14:49:00.221462    2230 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 14:49:00.221470    2230 cache.go:57] Caching tarball of preloaded images
	I0912 14:49:00.221534    2230 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 14:49:00.221538    2230 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 14:49:00.221760    2230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/config.json ...
	I0912 14:49:00.221772    2230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/config.json: {Name:mk06a6fbdf2c2577b65799f2206a348ad67067d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:49:00.221987    2230 start.go:365] acquiring machines lock for image-477000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:49:00.222022    2230 start.go:369] acquired machines lock for "image-477000" in 30.959µs
	I0912 14:49:00.222031    2230 start.go:93] Provisioning new machine with config: &{Name:image-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:image-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:49:00.222060    2230 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:49:00.230466    2230 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0912 14:49:00.252578    2230 start.go:159] libmachine.API.Create for "image-477000" (driver="qemu2")
	I0912 14:49:00.252598    2230 client.go:168] LocalClient.Create starting
	I0912 14:49:00.252668    2230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:49:00.252691    2230 main.go:141] libmachine: Decoding PEM data...
	I0912 14:49:00.252702    2230 main.go:141] libmachine: Parsing certificate...
	I0912 14:49:00.252741    2230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:49:00.252758    2230 main.go:141] libmachine: Decoding PEM data...
	I0912 14:49:00.252766    2230 main.go:141] libmachine: Parsing certificate...
	I0912 14:49:00.253089    2230 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:49:00.369882    2230 main.go:141] libmachine: Creating SSH key...
	I0912 14:49:00.469713    2230 main.go:141] libmachine: Creating Disk image...
	I0912 14:49:00.469716    2230 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:49:00.469853    2230 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/disk.qcow2
	I0912 14:49:00.478906    2230 main.go:141] libmachine: STDOUT: 
	I0912 14:49:00.478919    2230 main.go:141] libmachine: STDERR: 
	I0912 14:49:00.478975    2230 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/disk.qcow2 +20000M
	I0912 14:49:00.486247    2230 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:49:00.486263    2230 main.go:141] libmachine: STDERR: 
	I0912 14:49:00.486280    2230 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/disk.qcow2
	I0912 14:49:00.486285    2230 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:49:00.486316    2230 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:77:fd:2f:2f:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/disk.qcow2
	I0912 14:49:00.521860    2230 main.go:141] libmachine: STDOUT: 
	I0912 14:49:00.521876    2230 main.go:141] libmachine: STDERR: 
	I0912 14:49:00.521879    2230 main.go:141] libmachine: Attempt 0
	I0912 14:49:00.521891    2230 main.go:141] libmachine: Searching for 52:77:fd:2f:2f:31 in /var/db/dhcpd_leases ...
	I0912 14:49:00.521969    2230 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0912 14:49:00.521986    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:82:d7:33:ed:16:83 ID:1,82:d7:33:ed:16:83 Lease:0x65022d72}
	I0912 14:49:00.521992    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:92:55:8:ba:4f:b7 ID:1,92:55:8:ba:4f:b7 Lease:0x6500dbe6}
	I0912 14:49:00.521999    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:b8:71:61:13:50 ID:1,7e:b8:71:61:13:50 Lease:0x65022d1a}
	I0912 14:49:02.524126    2230 main.go:141] libmachine: Attempt 1
	I0912 14:49:02.524226    2230 main.go:141] libmachine: Searching for 52:77:fd:2f:2f:31 in /var/db/dhcpd_leases ...
	I0912 14:49:02.524511    2230 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0912 14:49:02.524558    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:82:d7:33:ed:16:83 ID:1,82:d7:33:ed:16:83 Lease:0x65022d72}
	I0912 14:49:02.524585    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:92:55:8:ba:4f:b7 ID:1,92:55:8:ba:4f:b7 Lease:0x6500dbe6}
	I0912 14:49:02.524615    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:b8:71:61:13:50 ID:1,7e:b8:71:61:13:50 Lease:0x65022d1a}
	I0912 14:49:04.526755    2230 main.go:141] libmachine: Attempt 2
	I0912 14:49:04.526767    2230 main.go:141] libmachine: Searching for 52:77:fd:2f:2f:31 in /var/db/dhcpd_leases ...
	I0912 14:49:04.526885    2230 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0912 14:49:04.526896    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:82:d7:33:ed:16:83 ID:1,82:d7:33:ed:16:83 Lease:0x65022d72}
	I0912 14:49:04.526901    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:92:55:8:ba:4f:b7 ID:1,92:55:8:ba:4f:b7 Lease:0x6500dbe6}
	I0912 14:49:04.526905    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:b8:71:61:13:50 ID:1,7e:b8:71:61:13:50 Lease:0x65022d1a}
	I0912 14:49:06.528973    2230 main.go:141] libmachine: Attempt 3
	I0912 14:49:06.529020    2230 main.go:141] libmachine: Searching for 52:77:fd:2f:2f:31 in /var/db/dhcpd_leases ...
	I0912 14:49:06.529104    2230 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0912 14:49:06.529114    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:82:d7:33:ed:16:83 ID:1,82:d7:33:ed:16:83 Lease:0x65022d72}
	I0912 14:49:06.529118    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:92:55:8:ba:4f:b7 ID:1,92:55:8:ba:4f:b7 Lease:0x6500dbe6}
	I0912 14:49:06.529122    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:b8:71:61:13:50 ID:1,7e:b8:71:61:13:50 Lease:0x65022d1a}
	I0912 14:49:08.531122    2230 main.go:141] libmachine: Attempt 4
	I0912 14:49:08.531126    2230 main.go:141] libmachine: Searching for 52:77:fd:2f:2f:31 in /var/db/dhcpd_leases ...
	I0912 14:49:08.531155    2230 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0912 14:49:08.531160    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:82:d7:33:ed:16:83 ID:1,82:d7:33:ed:16:83 Lease:0x65022d72}
	I0912 14:49:08.531164    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:92:55:8:ba:4f:b7 ID:1,92:55:8:ba:4f:b7 Lease:0x6500dbe6}
	I0912 14:49:08.531168    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:b8:71:61:13:50 ID:1,7e:b8:71:61:13:50 Lease:0x65022d1a}
	I0912 14:49:10.533209    2230 main.go:141] libmachine: Attempt 5
	I0912 14:49:10.533218    2230 main.go:141] libmachine: Searching for 52:77:fd:2f:2f:31 in /var/db/dhcpd_leases ...
	I0912 14:49:10.533311    2230 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0912 14:49:10.533320    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:82:d7:33:ed:16:83 ID:1,82:d7:33:ed:16:83 Lease:0x65022d72}
	I0912 14:49:10.533324    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:92:55:8:ba:4f:b7 ID:1,92:55:8:ba:4f:b7 Lease:0x6500dbe6}
	I0912 14:49:10.533328    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:b8:71:61:13:50 ID:1,7e:b8:71:61:13:50 Lease:0x65022d1a}
	I0912 14:49:12.535372    2230 main.go:141] libmachine: Attempt 6
	I0912 14:49:12.535407    2230 main.go:141] libmachine: Searching for 52:77:fd:2f:2f:31 in /var/db/dhcpd_leases ...
	I0912 14:49:12.535540    2230 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0912 14:49:12.535550    2230 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:52:77:fd:2f:2f:31 ID:1,52:77:fd:2f:2f:31 Lease:0x65022e57}
	I0912 14:49:12.535553    2230 main.go:141] libmachine: Found match: 52:77:fd:2f:2f:31
	I0912 14:49:12.535566    2230 main.go:141] libmachine: IP: 192.168.105.5
	I0912 14:49:12.535571    2230 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0912 14:49:13.540737    2230 machine.go:88] provisioning docker machine ...
	I0912 14:49:13.540751    2230 buildroot.go:166] provisioning hostname "image-477000"
	I0912 14:49:13.540791    2230 main.go:141] libmachine: Using SSH client type: native
	I0912 14:49:13.541042    2230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103350760] 0x103352ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0912 14:49:13.541046    2230 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-477000 && echo "image-477000" | sudo tee /etc/hostname
	I0912 14:49:13.618630    2230 main.go:141] libmachine: SSH cmd err, output: <nil>: image-477000
	
	I0912 14:49:13.618688    2230 main.go:141] libmachine: Using SSH client type: native
	I0912 14:49:13.618968    2230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103350760] 0x103352ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0912 14:49:13.618974    2230 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-477000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-477000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-477000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 14:49:13.692193    2230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 14:49:13.692202    2230 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17194-1051/.minikube CaCertPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17194-1051/.minikube}
	I0912 14:49:13.692221    2230 buildroot.go:174] setting up certificates
	I0912 14:49:13.692225    2230 provision.go:83] configureAuth start
	I0912 14:49:13.692228    2230 provision.go:138] copyHostCerts
	I0912 14:49:13.692291    2230 exec_runner.go:144] found /Users/jenkins/minikube-integration/17194-1051/.minikube/key.pem, removing ...
	I0912 14:49:13.692295    2230 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17194-1051/.minikube/key.pem
	I0912 14:49:13.692417    2230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17194-1051/.minikube/key.pem (1679 bytes)
	I0912 14:49:13.692583    2230 exec_runner.go:144] found /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.pem, removing ...
	I0912 14:49:13.692585    2230 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.pem
	I0912 14:49:13.692633    2230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.pem (1082 bytes)
	I0912 14:49:13.692721    2230 exec_runner.go:144] found /Users/jenkins/minikube-integration/17194-1051/.minikube/cert.pem, removing ...
	I0912 14:49:13.692723    2230 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17194-1051/.minikube/cert.pem
	I0912 14:49:13.692762    2230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17194-1051/.minikube/cert.pem (1123 bytes)
	I0912 14:49:13.692832    2230 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca-key.pem org=jenkins.image-477000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-477000]
	I0912 14:49:13.746911    2230 provision.go:172] copyRemoteCerts
	I0912 14:49:13.746947    2230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 14:49:13.746954    2230 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/id_rsa Username:docker}
	I0912 14:49:13.784542    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 14:49:13.791755    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0912 14:49:13.799291    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 14:49:13.806202    2230 provision.go:86] duration metric: configureAuth took 113.975166ms
	I0912 14:49:13.806217    2230 buildroot.go:189] setting minikube options for container-runtime
	I0912 14:49:13.806332    2230 config.go:182] Loaded profile config "image-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:49:13.806376    2230 main.go:141] libmachine: Using SSH client type: native
	I0912 14:49:13.806587    2230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103350760] 0x103352ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0912 14:49:13.806590    2230 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 14:49:13.875395    2230 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0912 14:49:13.875400    2230 buildroot.go:70] root file system type: tmpfs
	I0912 14:49:13.875455    2230 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 14:49:13.875498    2230 main.go:141] libmachine: Using SSH client type: native
	I0912 14:49:13.875752    2230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103350760] 0x103352ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0912 14:49:13.875787    2230 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 14:49:13.950301    2230 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 14:49:13.950345    2230 main.go:141] libmachine: Using SSH client type: native
	I0912 14:49:13.950615    2230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103350760] 0x103352ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0912 14:49:13.950623    2230 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 14:49:14.323136    2230 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0912 14:49:14.323151    2230 machine.go:91] provisioned docker machine in 782.423458ms
	I0912 14:49:14.323156    2230 client.go:171] LocalClient.Create took 14.070837041s
	I0912 14:49:14.323165    2230 start.go:167] duration metric: libmachine.API.Create for "image-477000" took 14.070876s
	I0912 14:49:14.323169    2230 start.go:300] post-start starting for "image-477000" (driver="qemu2")
	I0912 14:49:14.323173    2230 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 14:49:14.323252    2230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 14:49:14.323260    2230 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/id_rsa Username:docker}
	I0912 14:49:14.364328    2230 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 14:49:14.365806    2230 info.go:137] Remote host: Buildroot 2021.02.12
	I0912 14:49:14.365815    2230 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17194-1051/.minikube/addons for local assets ...
	I0912 14:49:14.365885    2230 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17194-1051/.minikube/files for local assets ...
	I0912 14:49:14.365989    2230 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17194-1051/.minikube/files/etc/ssl/certs/14702.pem -> 14702.pem in /etc/ssl/certs
	I0912 14:49:14.366107    2230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 14:49:14.368702    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/files/etc/ssl/certs/14702.pem --> /etc/ssl/certs/14702.pem (1708 bytes)
	I0912 14:49:14.375982    2230 start.go:303] post-start completed in 52.810625ms
	I0912 14:49:14.376314    2230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/config.json ...
	I0912 14:49:14.376473    2230 start.go:128] duration metric: createHost completed in 14.154692541s
	I0912 14:49:14.376523    2230 main.go:141] libmachine: Using SSH client type: native
	I0912 14:49:14.376735    2230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103350760] 0x103352ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0912 14:49:14.376738    2230 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 14:49:14.448255    2230 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694555354.509512418
	
	I0912 14:49:14.448260    2230 fix.go:206] guest clock: 1694555354.509512418
	I0912 14:49:14.448267    2230 fix.go:219] Guest: 2023-09-12 14:49:14.509512418 -0700 PDT Remote: 2023-09-12 14:49:14.376474 -0700 PDT m=+14.257749585 (delta=133.038418ms)
	I0912 14:49:14.448279    2230 fix.go:190] guest clock delta is within tolerance: 133.038418ms
	I0912 14:49:14.448281    2230 start.go:83] releasing machines lock for "image-477000", held for 14.22653975s
	I0912 14:49:14.448574    2230 ssh_runner.go:195] Run: cat /version.json
	I0912 14:49:14.448580    2230 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/id_rsa Username:docker}
	I0912 14:49:14.448590    2230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 14:49:14.448606    2230 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/id_rsa Username:docker}
	I0912 14:49:14.528478    2230 ssh_runner.go:195] Run: systemctl --version
	I0912 14:49:14.530691    2230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 14:49:14.532686    2230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 14:49:14.532716    2230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 14:49:14.538104    2230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 14:49:14.538108    2230 start.go:469] detecting cgroup driver to use...
	I0912 14:49:14.538183    2230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 14:49:14.544046    2230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0912 14:49:14.547453    2230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 14:49:14.550459    2230 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 14:49:14.550492    2230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 14:49:14.553358    2230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 14:49:14.556526    2230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 14:49:14.559969    2230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 14:49:14.563557    2230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 14:49:14.567103    2230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 14:49:14.570015    2230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 14:49:14.572723    2230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 14:49:14.575915    2230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:49:14.649184    2230 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 14:49:14.658943    2230 start.go:469] detecting cgroup driver to use...
	I0912 14:49:14.659001    2230 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 14:49:14.663865    2230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 14:49:14.668048    2230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 14:49:14.673786    2230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 14:49:14.678238    2230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 14:49:14.682824    2230 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 14:49:14.728033    2230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 14:49:14.733683    2230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 14:49:14.739277    2230 ssh_runner.go:195] Run: which cri-dockerd
	I0912 14:49:14.740426    2230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 14:49:14.743519    2230 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0912 14:49:14.748577    2230 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 14:49:14.828677    2230 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 14:49:14.908719    2230 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 14:49:14.908728    2230 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0912 14:49:14.914461    2230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:49:14.999471    2230 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 14:49:16.167408    2230 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.167947583s)
	I0912 14:49:16.167462    2230 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 14:49:16.250435    2230 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 14:49:16.328013    2230 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 14:49:16.409200    2230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:49:16.477549    2230 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 14:49:16.484705    2230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:49:16.568780    2230 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0912 14:49:16.592594    2230 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0912 14:49:16.592671    2230 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0912 14:49:16.595306    2230 start.go:537] Will wait 60s for crictl version
	I0912 14:49:16.595348    2230 ssh_runner.go:195] Run: which crictl
	I0912 14:49:16.596858    2230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 14:49:16.612078    2230 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0912 14:49:16.612160    2230 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 14:49:16.621876    2230 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 14:49:16.636815    2230 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0912 14:49:16.636886    2230 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0912 14:49:16.638355    2230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 14:49:16.641850    2230 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 14:49:16.641890    2230 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 14:49:16.647070    2230 docker.go:636] Got preloaded images: 
	I0912 14:49:16.647074    2230 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0912 14:49:16.647103    2230 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0912 14:49:16.650100    2230 ssh_runner.go:195] Run: which lz4
	I0912 14:49:16.651422    2230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 14:49:16.652787    2230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 14:49:16.652799    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0912 14:49:17.976040    2230 docker.go:600] Took 1.324680 seconds to copy over tarball
	I0912 14:49:17.976088    2230 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 14:49:18.997955    2230 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.021869083s)
	I0912 14:49:18.997966    2230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 14:49:19.013515    2230 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0912 14:49:19.016784    2230 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0912 14:49:19.022168    2230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:49:19.099292    2230 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 14:49:20.543054    2230 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.443777333s)
	I0912 14:49:20.543131    2230 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 14:49:20.549010    2230 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 14:49:20.549017    2230 cache_images.go:84] Images are preloaded, skipping loading
	I0912 14:49:20.549066    2230 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0912 14:49:20.556741    2230 cni.go:84] Creating CNI manager for ""
	I0912 14:49:20.556750    2230 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:49:20.556766    2230 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0912 14:49:20.556781    2230 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-477000 NodeName:image-477000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 14:49:20.556854    2230 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-477000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 14:49:20.556892    2230 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-477000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:image-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0912 14:49:20.556947    2230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0912 14:49:20.560197    2230 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 14:49:20.560228    2230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 14:49:20.563301    2230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0912 14:49:20.568245    2230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 14:49:20.573158    2230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0912 14:49:20.578130    2230 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0912 14:49:20.579404    2230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 14:49:20.583237    2230 certs.go:56] Setting up /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000 for IP: 192.168.105.5
	I0912 14:49:20.583244    2230 certs.go:190] acquiring lock for shared ca certs: {Name:mk62fa2aa67693071dd0720b8deb8309ed3c8567 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:49:20.583376    2230 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.key
	I0912 14:49:20.583412    2230 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.key
	I0912 14:49:20.583440    2230 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/client.key
	I0912 14:49:20.583445    2230 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/client.crt with IP's: []
	I0912 14:49:20.684179    2230 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/client.crt ...
	I0912 14:49:20.684184    2230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/client.crt: {Name:mkfda629912e7a5be2eb41580d0a820bb5bbce86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:49:20.684421    2230 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/client.key ...
	I0912 14:49:20.684423    2230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/client.key: {Name:mk58fb2bbbdc1d9773153b5146b049f83418a736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:49:20.684528    2230 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/apiserver.key.e69b33ca
	I0912 14:49:20.684534    2230 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0912 14:49:20.734374    2230 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/apiserver.crt.e69b33ca ...
	I0912 14:49:20.734376    2230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/apiserver.crt.e69b33ca: {Name:mk9ca280877ef6a4491b32a809f5146abe7ed605 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:49:20.734504    2230 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/apiserver.key.e69b33ca ...
	I0912 14:49:20.734506    2230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/apiserver.key.e69b33ca: {Name:mk7bf4166b034ca85a6c0433685a14bd8773e0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:49:20.734610    2230 certs.go:337] copying /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/apiserver.crt
	I0912 14:49:20.734690    2230 certs.go:341] copying /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/apiserver.key
	I0912 14:49:20.734767    2230 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/proxy-client.key
	I0912 14:49:20.734772    2230 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/proxy-client.crt with IP's: []
	I0912 14:49:20.867916    2230 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/proxy-client.crt ...
	I0912 14:49:20.867918    2230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/proxy-client.crt: {Name:mkd1d8802ff1bb47d402131668b96f5f9566e998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:49:20.868052    2230 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/proxy-client.key ...
	I0912 14:49:20.868054    2230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/proxy-client.key: {Name:mk4aa32f6c22f209cfc4122d085a6f1e6088e50a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:49:20.868288    2230 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/1470.pem (1338 bytes)
	W0912 14:49:20.868312    2230 certs.go:433] ignoring /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/1470_empty.pem, impossibly tiny 0 bytes
	I0912 14:49:20.868318    2230 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca-key.pem (1675 bytes)
	I0912 14:49:20.868339    2230 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem (1082 bytes)
	I0912 14:49:20.868355    2230 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem (1123 bytes)
	I0912 14:49:20.868372    2230 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/key.pem (1679 bytes)
	I0912 14:49:20.868412    2230 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/files/etc/ssl/certs/14702.pem (1708 bytes)
	I0912 14:49:20.868719    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0912 14:49:20.876380    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 14:49:20.883570    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 14:49:20.890384    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/image-477000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 14:49:20.896767    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 14:49:20.903964    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0912 14:49:20.911346    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 14:49:20.917890    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 14:49:20.924442    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/files/etc/ssl/certs/14702.pem --> /usr/share/ca-certificates/14702.pem (1708 bytes)
	I0912 14:49:20.931584    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 14:49:20.938566    2230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/1470.pem --> /usr/share/ca-certificates/1470.pem (1338 bytes)
	I0912 14:49:20.945143    2230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 14:49:20.950080    2230 ssh_runner.go:195] Run: openssl version
	I0912 14:49:20.952073    2230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14702.pem && ln -fs /usr/share/ca-certificates/14702.pem /etc/ssl/certs/14702.pem"
	I0912 14:49:20.955449    2230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14702.pem
	I0912 14:49:20.957008    2230 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:45 /usr/share/ca-certificates/14702.pem
	I0912 14:49:20.957025    2230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14702.pem
	I0912 14:49:20.958822    2230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14702.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 14:49:20.961649    2230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 14:49:20.964750    2230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 14:49:20.966209    2230 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:44 /usr/share/ca-certificates/minikubeCA.pem
	I0912 14:49:20.966225    2230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 14:49:20.967890    2230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 14:49:20.971052    2230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1470.pem && ln -fs /usr/share/ca-certificates/1470.pem /etc/ssl/certs/1470.pem"
	I0912 14:49:20.974224    2230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1470.pem
	I0912 14:49:20.975667    2230 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:45 /usr/share/ca-certificates/1470.pem
	I0912 14:49:20.975686    2230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1470.pem
	I0912 14:49:20.977570    2230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1470.pem /etc/ssl/certs/51391683.0"
	I0912 14:49:20.980785    2230 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0912 14:49:20.982258    2230 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0912 14:49:20.982287    2230 kubeadm.go:404] StartCluster: {Name:image-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:49:20.982349    2230 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 14:49:20.987777    2230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 14:49:20.990984    2230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 14:49:20.993615    2230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 14:49:20.996606    2230 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 14:49:20.996618    2230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 14:49:21.019160    2230 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0912 14:49:21.019189    2230 kubeadm.go:322] [preflight] Running pre-flight checks
	I0912 14:49:21.073058    2230 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 14:49:21.073115    2230 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 14:49:21.073158    2230 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 14:49:21.132775    2230 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 14:49:21.137932    2230 out.go:204]   - Generating certificates and keys ...
	I0912 14:49:21.137978    2230 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0912 14:49:21.138006    2230 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0912 14:49:21.212631    2230 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 14:49:21.253074    2230 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0912 14:49:21.437791    2230 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0912 14:49:21.470046    2230 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0912 14:49:21.566508    2230 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0912 14:49:21.566571    2230 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-477000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0912 14:49:21.671057    2230 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0912 14:49:21.671115    2230 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-477000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0912 14:49:21.738894    2230 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 14:49:22.056097    2230 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 14:49:22.133452    2230 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0912 14:49:22.133483    2230 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 14:49:22.219770    2230 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 14:49:22.312637    2230 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 14:49:22.342046    2230 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 14:49:22.532274    2230 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 14:49:22.532490    2230 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 14:49:22.533691    2230 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 14:49:22.541960    2230 out.go:204]   - Booting up control plane ...
	I0912 14:49:22.542076    2230 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 14:49:22.542137    2230 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 14:49:22.542175    2230 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 14:49:22.542253    2230 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 14:49:22.542491    2230 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 14:49:22.542506    2230 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0912 14:49:22.630363    2230 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 14:49:26.631940    2230 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001881 seconds
	I0912 14:49:26.631995    2230 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 14:49:26.636504    2230 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 14:49:27.145841    2230 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 14:49:27.145956    2230 kubeadm.go:322] [mark-control-plane] Marking the node image-477000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 14:49:27.650510    2230 kubeadm.go:322] [bootstrap-token] Using token: 1vdogc.kxxc4tt4v23z73tv
	I0912 14:49:27.657140    2230 out.go:204]   - Configuring RBAC rules ...
	I0912 14:49:27.657202    2230 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 14:49:27.658066    2230 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 14:49:27.664841    2230 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 14:49:27.665974    2230 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 14:49:27.667378    2230 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 14:49:27.668442    2230 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 14:49:27.672252    2230 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 14:49:27.842354    2230 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0912 14:49:28.059946    2230 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0912 14:49:28.060477    2230 kubeadm.go:322] 
	I0912 14:49:28.060506    2230 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0912 14:49:28.060508    2230 kubeadm.go:322] 
	I0912 14:49:28.060546    2230 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0912 14:49:28.060548    2230 kubeadm.go:322] 
	I0912 14:49:28.060563    2230 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0912 14:49:28.060596    2230 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 14:49:28.060624    2230 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 14:49:28.060627    2230 kubeadm.go:322] 
	I0912 14:49:28.060652    2230 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0912 14:49:28.060654    2230 kubeadm.go:322] 
	I0912 14:49:28.060682    2230 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 14:49:28.060684    2230 kubeadm.go:322] 
	I0912 14:49:28.060712    2230 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0912 14:49:28.060746    2230 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 14:49:28.060774    2230 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 14:49:28.060777    2230 kubeadm.go:322] 
	I0912 14:49:28.060824    2230 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 14:49:28.060856    2230 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0912 14:49:28.060864    2230 kubeadm.go:322] 
	I0912 14:49:28.060918    2230 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1vdogc.kxxc4tt4v23z73tv \
	I0912 14:49:28.060977    2230 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e282c167e7eeeb67fd4ecdd8b7cd7118f3d3f8a2efd76b40b6ed9b18bf47a7d9 \
	I0912 14:49:28.060986    2230 kubeadm.go:322] 	--control-plane 
	I0912 14:49:28.060988    2230 kubeadm.go:322] 
	I0912 14:49:28.061028    2230 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0912 14:49:28.061030    2230 kubeadm.go:322] 
	I0912 14:49:28.061073    2230 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1vdogc.kxxc4tt4v23z73tv \
	I0912 14:49:28.061118    2230 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e282c167e7eeeb67fd4ecdd8b7cd7118f3d3f8a2efd76b40b6ed9b18bf47a7d9 
	I0912 14:49:28.061264    2230 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 14:49:28.061270    2230 cni.go:84] Creating CNI manager for ""
	I0912 14:49:28.061276    2230 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:49:28.069043    2230 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 14:49:28.073139    2230 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 14:49:28.076344    2230 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0912 14:49:28.081148    2230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 14:49:28.081187    2230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:49:28.081222    2230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45f04e6c33f17ea86560d581e35f03eca0c584e1 minikube.k8s.io/name=image-477000 minikube.k8s.io/updated_at=2023_09_12T14_49_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:49:28.141936    2230 ops.go:34] apiserver oom_adj: -16
	I0912 14:49:28.141955    2230 kubeadm.go:1081] duration metric: took 60.805334ms to wait for elevateKubeSystemPrivileges.
	I0912 14:49:28.141960    2230 kubeadm.go:406] StartCluster complete in 7.159818625s
	I0912 14:49:28.141973    2230 settings.go:142] acquiring lock: {Name:mke2a1c2b91a69fc9538d2ab9217887ccaa535ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:49:28.142044    2230 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:49:28.143190    2230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/kubeconfig: {Name:mk92e8fca531d1e53b216ab5c46209b819337697 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:49:28.143402    2230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 14:49:28.143436    2230 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0912 14:49:28.143487    2230 addons.go:69] Setting storage-provisioner=true in profile "image-477000"
	I0912 14:49:28.143493    2230 addons.go:231] Setting addon storage-provisioner=true in "image-477000"
	I0912 14:49:28.143498    2230 config.go:182] Loaded profile config "image-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:49:28.143502    2230 addons.go:69] Setting default-storageclass=true in profile "image-477000"
	I0912 14:49:28.143507    2230 host.go:66] Checking if "image-477000" exists ...
	I0912 14:49:28.143527    2230 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-477000"
	I0912 14:49:28.148471    2230 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 14:49:28.150319    2230 addons.go:231] Setting addon default-storageclass=true in "image-477000"
	I0912 14:49:28.152535    2230 host.go:66] Checking if "image-477000" exists ...
	I0912 14:49:28.152540    2230 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 14:49:28.152545    2230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 14:49:28.152553    2230 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/id_rsa Username:docker}
	I0912 14:49:28.153318    2230 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 14:49:28.153321    2230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 14:49:28.153325    2230 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/image-477000/id_rsa Username:docker}
	I0912 14:49:28.154175    2230 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-477000" context rescaled to 1 replicas
	I0912 14:49:28.154185    2230 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:49:28.161452    2230 out.go:177] * Verifying Kubernetes components...
	I0912 14:49:28.165550    2230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 14:49:28.192647    2230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 14:49:28.193008    2230 api_server.go:52] waiting for apiserver process to appear ...
	I0912 14:49:28.193035    2230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 14:49:28.203688    2230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 14:49:28.238783    2230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 14:49:28.605108    2230 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0912 14:49:28.605122    2230 api_server.go:72] duration metric: took 450.935834ms to wait for apiserver process to appear ...
	I0912 14:49:28.605126    2230 api_server.go:88] waiting for apiserver healthz status ...
	I0912 14:49:28.605132    2230 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0912 14:49:28.608342    2230 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0912 14:49:28.609109    2230 api_server.go:141] control plane version: v1.28.1
	I0912 14:49:28.609113    2230 api_server.go:131] duration metric: took 3.985417ms to wait for apiserver health ...
	I0912 14:49:28.609117    2230 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 14:49:28.612142    2230 system_pods.go:59] 4 kube-system pods found
	I0912 14:49:28.612148    2230 system_pods.go:61] "etcd-image-477000" [d3460da6-7f71-4a2a-9e7e-223def62e0b9] Pending
	I0912 14:49:28.612150    2230 system_pods.go:61] "kube-apiserver-image-477000" [82088ef1-d084-4bf6-b480-ecee9c0806c2] Pending
	I0912 14:49:28.612152    2230 system_pods.go:61] "kube-controller-manager-image-477000" [2bf53564-4a70-4c51-a3e6-db866c78b5ef] Pending
	I0912 14:49:28.612154    2230 system_pods.go:61] "kube-scheduler-image-477000" [e89f33a4-0fa3-4d5f-ac5a-1f490f4efe1d] Pending
	I0912 14:49:28.612156    2230 system_pods.go:74] duration metric: took 3.036917ms to wait for pod list to return data ...
	I0912 14:49:28.612159    2230 kubeadm.go:581] duration metric: took 457.974917ms to wait for : map[apiserver:true system_pods:true] ...
	I0912 14:49:28.612164    2230 node_conditions.go:102] verifying NodePressure condition ...
	I0912 14:49:28.613595    2230 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0912 14:49:28.613601    2230 node_conditions.go:123] node cpu capacity is 2
	I0912 14:49:28.613606    2230 node_conditions.go:105] duration metric: took 1.440417ms to run NodePressure ...
	I0912 14:49:28.613610    2230 start.go:228] waiting for startup goroutines ...
	I0912 14:49:28.683923    2230 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0912 14:49:28.687944    2230 addons.go:502] enable addons completed in 544.533042ms: enabled=[default-storageclass storage-provisioner]
	I0912 14:49:28.687954    2230 start.go:233] waiting for cluster config update ...
	I0912 14:49:28.687959    2230 start.go:242] writing updated cluster config ...
	I0912 14:49:28.688199    2230 ssh_runner.go:195] Run: rm -f paused
	I0912 14:49:28.717027    2230 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0912 14:49:28.721013    2230 out.go:177] * Done! kubectl is now configured to use "image-477000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-09-12 21:49:11 UTC, ends at Tue 2023-09-12 21:49:31 UTC. --
	Sep 12 21:49:23 image-477000 cri-dockerd[1001]: time="2023-09-12T21:49:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8433bd7958fe646e4a16c88652e52b71df2323a424ecfa9e9aa49f43ed5c738/resolv.conf as [nameserver 192.168.105.1]"
	Sep 12 21:49:23 image-477000 dockerd[1112]: time="2023-09-12T21:49:23.640215297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 21:49:23 image-477000 dockerd[1112]: time="2023-09-12T21:49:23.640288422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:49:23 image-477000 dockerd[1112]: time="2023-09-12T21:49:23.640311256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 21:49:23 image-477000 dockerd[1112]: time="2023-09-12T21:49:23.640322797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:49:23 image-477000 dockerd[1112]: time="2023-09-12T21:49:23.655254006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 21:49:23 image-477000 dockerd[1112]: time="2023-09-12T21:49:23.655365506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:49:23 image-477000 dockerd[1112]: time="2023-09-12T21:49:23.655379547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 21:49:23 image-477000 dockerd[1112]: time="2023-09-12T21:49:23.655388589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:49:23 image-477000 cri-dockerd[1001]: time="2023-09-12T21:49:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7f5e2bb2903834cfbcc7dfb7807f9e18b03b20f1582696fe6a019a8c33170592/resolv.conf as [nameserver 192.168.105.1]"
	Sep 12 21:49:23 image-477000 dockerd[1112]: time="2023-09-12T21:49:23.751253172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 21:49:23 image-477000 dockerd[1112]: time="2023-09-12T21:49:23.751398922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:49:23 image-477000 dockerd[1112]: time="2023-09-12T21:49:23.751429672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 21:49:23 image-477000 dockerd[1112]: time="2023-09-12T21:49:23.751453839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:49:30 image-477000 dockerd[1106]: time="2023-09-12T21:49:30.387972342Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 12 21:49:30 image-477000 dockerd[1106]: time="2023-09-12T21:49:30.510858717Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 12 21:49:30 image-477000 dockerd[1106]: time="2023-09-12T21:49:30.530558134Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 12 21:49:30 image-477000 dockerd[1112]: time="2023-09-12T21:49:30.564521717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 21:49:30 image-477000 dockerd[1112]: time="2023-09-12T21:49:30.564548926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:49:30 image-477000 dockerd[1112]: time="2023-09-12T21:49:30.564557509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 21:49:30 image-477000 dockerd[1112]: time="2023-09-12T21:49:30.564563342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:49:30 image-477000 dockerd[1106]: time="2023-09-12T21:49:30.693687842Z" level=info msg="ignoring event" container=927c5d5ddee0477da5ad0c272a9674ef458c9eef2190faac3c550692b4f46d59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:49:30 image-477000 dockerd[1112]: time="2023-09-12T21:49:30.693906551Z" level=info msg="shim disconnected" id=927c5d5ddee0477da5ad0c272a9674ef458c9eef2190faac3c550692b4f46d59 namespace=moby
	Sep 12 21:49:30 image-477000 dockerd[1112]: time="2023-09-12T21:49:30.693936926Z" level=warning msg="cleaning up after shim disconnected" id=927c5d5ddee0477da5ad0c272a9674ef458c9eef2190faac3c550692b4f46d59 namespace=moby
	Sep 12 21:49:30 image-477000 dockerd[1112]: time="2023-09-12T21:49:30.693941134Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	dbdc65b79b118       9cdd6470f48c8       8 seconds ago       Running             etcd                      0                   7f5e2bb290383
	4da6492897725       8b6e1980b7584       8 seconds ago       Running             kube-controller-manager   0                   a8433bd7958fe
	74e09fed8f547       b29fb62480892       8 seconds ago       Running             kube-apiserver            0                   a4c874a9192c4
	681fcd20940c3       b4a5a57e99492       8 seconds ago       Running             kube-scheduler            0                   d176e503c7423
	
	* 
	* ==> describe nodes <==
	* Name:               image-477000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-477000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45f04e6c33f17ea86560d581e35f03eca0c584e1
	                    minikube.k8s.io/name=image-477000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_12T14_49_28_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 21:49:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-477000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 21:49:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 21:49:28 +0000   Tue, 12 Sep 2023 21:49:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 21:49:28 +0000   Tue, 12 Sep 2023 21:49:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 21:49:28 +0000   Tue, 12 Sep 2023 21:49:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Sep 2023 21:49:28 +0000   Tue, 12 Sep 2023 21:49:24 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-477000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 936c148049aa4448923bfe46fc98364b
	  System UUID:                936c148049aa4448923bfe46fc98364b
	  Boot ID:                    cd9f6930-113b-46a9-adc1-9e932e991575
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-477000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3s
	  kube-system                 kube-apiserver-image-477000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-477000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-477000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 4s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet  Node image-477000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet  Node image-477000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet  Node image-477000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Sep12 21:49] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.670047] EINJ: EINJ table not found.
	[  +0.524703] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043436] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000879] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.090999] systemd-fstab-generator[486]: Ignoring "noauto" for root device
	[  +0.086777] systemd-fstab-generator[497]: Ignoring "noauto" for root device
	[  +0.457038] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.179221] systemd-fstab-generator[710]: Ignoring "noauto" for root device
	[  +0.080802] systemd-fstab-generator[721]: Ignoring "noauto" for root device
	[  +0.091732] systemd-fstab-generator[734]: Ignoring "noauto" for root device
	[  +1.148811] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.101570] systemd-fstab-generator[920]: Ignoring "noauto" for root device
	[  +0.075830] systemd-fstab-generator[931]: Ignoring "noauto" for root device
	[  +0.080596] systemd-fstab-generator[942]: Ignoring "noauto" for root device
	[  +0.071164] systemd-fstab-generator[953]: Ignoring "noauto" for root device
	[  +0.090546] systemd-fstab-generator[994]: Ignoring "noauto" for root device
	[  +2.528706] systemd-fstab-generator[1099]: Ignoring "noauto" for root device
	[  +3.527508] systemd-fstab-generator[1426]: Ignoring "noauto" for root device
	[  +0.222338] kauditd_printk_skb: 68 callbacks suppressed
	[  +4.902949] systemd-fstab-generator[2333]: Ignoring "noauto" for root device
	[  +2.771350] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [dbdc65b79b11] <==
	* {"level":"info","ts":"2023-09-12T21:49:23.866184Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-12T21:49:23.866212Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-12T21:49:23.866534Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-12T21:49:23.866585Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-12T21:49:23.866699Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"58de0efec1d86300","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-09-12T21:49:23.866967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-09-12T21:49:23.86704Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-09-12T21:49:24.652565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-12T21:49:24.652647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-12T21:49:24.652714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-09-12T21:49:24.652774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-09-12T21:49:24.65279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-12T21:49:24.65281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-09-12T21:49:24.652827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-12T21:49:24.653787Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-477000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-12T21:49:24.653837Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T21:49:24.653963Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-12T21:49:24.65402Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-12T21:49:24.653789Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T21:49:24.654547Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-12T21:49:24.654731Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-09-12T21:49:24.653807Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T21:49:24.655216Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T21:49:24.655255Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T21:49:24.655267Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  21:49:31 up 0 min,  0 users,  load average: 0.06, 0.02, 0.00
	Linux image-477000 5.10.57 #1 SMP PREEMPT Mon Sep 11 23:30:27 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [74e09fed8f54] <==
	* I0912 21:49:25.273850       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0912 21:49:25.273868       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0912 21:49:25.280006       1 controller.go:624] quota admission added evaluator for: namespaces
	I0912 21:49:25.280022       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0912 21:49:25.283735       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0912 21:49:25.295211       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0912 21:49:25.295293       1 aggregator.go:166] initial CRD sync complete...
	I0912 21:49:25.295332       1 autoregister_controller.go:141] Starting autoregister controller
	I0912 21:49:25.295354       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0912 21:49:25.295369       1 cache.go:39] Caches are synced for autoregister controller
	I0912 21:49:25.310716       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 21:49:25.339796       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0912 21:49:26.181914       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0912 21:49:26.183146       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0912 21:49:26.183156       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0912 21:49:26.338122       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 21:49:26.351047       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0912 21:49:26.395209       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0912 21:49:26.397269       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0912 21:49:26.397572       1 controller.go:624] quota admission added evaluator for: endpoints
	I0912 21:49:26.399455       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0912 21:49:27.237251       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0912 21:49:27.899412       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0912 21:49:27.903207       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0912 21:49:27.907761       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [4da649289772] <==
	* I0912 21:49:24.590147       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0912 21:49:27.233651       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0912 21:49:27.238286       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0912 21:49:27.238359       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0912 21:49:27.238366       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0912 21:49:27.240956       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0912 21:49:27.241024       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0912 21:49:27.241028       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0912 21:49:27.250373       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0912 21:49:27.250392       1 namespace_controller.go:197] "Starting namespace controller"
	I0912 21:49:27.250398       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0912 21:49:27.252836       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0912 21:49:27.252894       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0912 21:49:27.252898       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0912 21:49:27.255450       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0912 21:49:27.255523       1 ttl_controller.go:124] "Starting TTL controller"
	I0912 21:49:27.255532       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0912 21:49:27.258624       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0912 21:49:27.258977       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0912 21:49:27.258992       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0912 21:49:27.264520       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0912 21:49:27.264547       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0912 21:49:27.264526       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0912 21:49:27.264615       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0912 21:49:27.334613       1 shared_informer.go:318] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [681fcd20940c] <==
	* W0912 21:49:25.269645       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:49:25.269649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0912 21:49:25.269666       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 21:49:25.269673       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0912 21:49:25.269667       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 21:49:25.269678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0912 21:49:25.269695       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 21:49:25.269699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0912 21:49:25.269714       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 21:49:25.269718       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0912 21:49:25.269734       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 21:49:25.269741       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0912 21:49:25.269756       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 21:49:25.269763       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0912 21:49:25.269800       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 21:49:25.269804       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0912 21:49:25.269845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 21:49:25.269853       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0912 21:49:26.113120       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 21:49:26.113136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0912 21:49:26.155093       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:49:26.155102       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 21:49:26.298916       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 21:49:26.298977       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0912 21:49:28.264436       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-12 21:49:11 UTC, ends at Tue 2023-09-12 21:49:31 UTC. --
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.043693    2339 kubelet_node_status.go:70] "Attempting to register node" node="image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.048194    2339 kubelet_node_status.go:108] "Node was previously registered" node="image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.048230    2339 kubelet_node_status.go:73] "Successfully registered node" node="image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.071209    2339 topology_manager.go:215] "Topology Admit Handler" podUID="04a8fd5e100f7aa6635b68cf4d3995f8" podNamespace="kube-system" podName="etcd-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.071284    2339 topology_manager.go:215] "Topology Admit Handler" podUID="aed546e5fce1a32a1308fdb993a433ef" podNamespace="kube-system" podName="kube-apiserver-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.071304    2339 topology_manager.go:215] "Topology Admit Handler" podUID="6d90b839ace100fdf1872b650d8e31c3" podNamespace="kube-system" podName="kube-controller-manager-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.071316    2339 topology_manager.go:215] "Topology Admit Handler" podUID="7831e4d668819d0eb2d7e68635c02ac3" podNamespace="kube-system" podName="kube-scheduler-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.242772    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d90b839ace100fdf1872b650d8e31c3-k8s-certs\") pod \"kube-controller-manager-image-477000\" (UID: \"6d90b839ace100fdf1872b650d8e31c3\") " pod="kube-system/kube-controller-manager-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.242792    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d90b839ace100fdf1872b650d8e31c3-kubeconfig\") pod \"kube-controller-manager-image-477000\" (UID: \"6d90b839ace100fdf1872b650d8e31c3\") " pod="kube-system/kube-controller-manager-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.242805    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d90b839ace100fdf1872b650d8e31c3-usr-share-ca-certificates\") pod \"kube-controller-manager-image-477000\" (UID: \"6d90b839ace100fdf1872b650d8e31c3\") " pod="kube-system/kube-controller-manager-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.242814    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7831e4d668819d0eb2d7e68635c02ac3-kubeconfig\") pod \"kube-scheduler-image-477000\" (UID: \"7831e4d668819d0eb2d7e68635c02ac3\") " pod="kube-system/kube-scheduler-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.242961    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/04a8fd5e100f7aa6635b68cf4d3995f8-etcd-data\") pod \"etcd-image-477000\" (UID: \"04a8fd5e100f7aa6635b68cf4d3995f8\") " pod="kube-system/etcd-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.243065    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aed546e5fce1a32a1308fdb993a433ef-k8s-certs\") pod \"kube-apiserver-image-477000\" (UID: \"aed546e5fce1a32a1308fdb993a433ef\") " pod="kube-system/kube-apiserver-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.243079    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d90b839ace100fdf1872b650d8e31c3-ca-certs\") pod \"kube-controller-manager-image-477000\" (UID: \"6d90b839ace100fdf1872b650d8e31c3\") " pod="kube-system/kube-controller-manager-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.243093    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6d90b839ace100fdf1872b650d8e31c3-flexvolume-dir\") pod \"kube-controller-manager-image-477000\" (UID: \"6d90b839ace100fdf1872b650d8e31c3\") " pod="kube-system/kube-controller-manager-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.243149    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/04a8fd5e100f7aa6635b68cf4d3995f8-etcd-certs\") pod \"etcd-image-477000\" (UID: \"04a8fd5e100f7aa6635b68cf4d3995f8\") " pod="kube-system/etcd-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.243159    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aed546e5fce1a32a1308fdb993a433ef-ca-certs\") pod \"kube-apiserver-image-477000\" (UID: \"aed546e5fce1a32a1308fdb993a433ef\") " pod="kube-system/kube-apiserver-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.243413    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aed546e5fce1a32a1308fdb993a433ef-usr-share-ca-certificates\") pod \"kube-apiserver-image-477000\" (UID: \"aed546e5fce1a32a1308fdb993a433ef\") " pod="kube-system/kube-apiserver-image-477000"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.930576    2339 apiserver.go:52] "Watching apiserver"
	Sep 12 21:49:28 image-477000 kubelet[2339]: I0912 21:49:28.941470    2339 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 12 21:49:28 image-477000 kubelet[2339]: E0912 21:49:28.999495    2339 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-477000\" already exists" pod="kube-system/kube-apiserver-image-477000"
	Sep 12 21:49:29 image-477000 kubelet[2339]: I0912 21:49:29.001188    2339 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-477000" podStartSLOduration=1.000719633 podCreationTimestamp="2023-09-12 21:49:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-12 21:49:29.0006858 +0000 UTC m=+1.114855252" watchObservedRunningTime="2023-09-12 21:49:29.000719633 +0000 UTC m=+1.114889085"
	Sep 12 21:49:29 image-477000 kubelet[2339]: I0912 21:49:29.004273    2339 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-477000" podStartSLOduration=1.004241592 podCreationTimestamp="2023-09-12 21:49:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-12 21:49:29.004176467 +0000 UTC m=+1.118345918" watchObservedRunningTime="2023-09-12 21:49:29.004241592 +0000 UTC m=+1.118411043"
	Sep 12 21:49:29 image-477000 kubelet[2339]: I0912 21:49:29.013184    2339 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-477000" podStartSLOduration=1.013164175 podCreationTimestamp="2023-09-12 21:49:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-12 21:49:29.008116925 +0000 UTC m=+1.122286377" watchObservedRunningTime="2023-09-12 21:49:29.013164175 +0000 UTC m=+1.127333585"
	Sep 12 21:49:29 image-477000 kubelet[2339]: I0912 21:49:29.013316    2339 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-477000" podStartSLOduration=1.013219758 podCreationTimestamp="2023-09-12 21:49:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-12 21:49:29.012184133 +0000 UTC m=+1.126353585" watchObservedRunningTime="2023-09-12 21:49:29.013219758 +0000 UTC m=+1.127389210"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-477000 -n image-477000
helpers_test.go:261: (dbg) Run:  kubectl --context image-477000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-477000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-477000 describe pod storage-provisioner: exit status 1 (37.7305ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-477000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.06s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (54.07s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-627000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-627000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.600143125s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-627000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-627000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [66600f0d-82da-4f6c-b861-c83aab37b01a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [66600f0d-82da-4f6c-b861-c83aab37b01a] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.015889667s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-627000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-627000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-627000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.026593708s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-627000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-627000 addons disable ingress-dns --alsologtostderr -v=1: (7.065701875s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-627000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-627000 addons disable ingress --alsologtostderr -v=1: (7.104667875s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-627000 -n ingress-addon-legacy-627000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-627000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| dashboard      | --url --port 36195                       | functional-737000           | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | -p functional-737000                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| update-context | functional-737000                        | functional-737000           | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-737000                        | functional-737000           | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-737000                        | functional-737000           | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-737000                        | functional-737000           | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-737000                        | functional-737000           | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-737000 ssh pgrep              | functional-737000           | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-737000 image build -t         | functional-737000           | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | localhost/my-image:functional-737000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-737000 image ls               | functional-737000           | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	| image          | functional-737000                        | functional-737000           | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-737000                        | functional-737000           | jenkins | v1.31.2 | 12 Sep 23 14:48 PDT | 12 Sep 23 14:48 PDT |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| delete         | -p functional-737000                     | functional-737000           | jenkins | v1.31.2 | 12 Sep 23 14:49 PDT | 12 Sep 23 14:49 PDT |
	| start          | -p image-477000 --driver=qemu2           | image-477000                | jenkins | v1.31.2 | 12 Sep 23 14:49 PDT | 12 Sep 23 14:49 PDT |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-477000                | jenkins | v1.31.2 | 12 Sep 23 14:49 PDT | 12 Sep 23 14:49 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-477000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-477000                | jenkins | v1.31.2 | 12 Sep 23 14:49 PDT | 12 Sep 23 14:49 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-477000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-477000                | jenkins | v1.31.2 | 12 Sep 23 14:49 PDT | 12 Sep 23 14:49 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-477000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-477000                | jenkins | v1.31.2 | 12 Sep 23 14:49 PDT | 12 Sep 23 14:49 PDT |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-477000                          |                             |         |         |                     |                     |
	| delete         | -p image-477000                          | image-477000                | jenkins | v1.31.2 | 12 Sep 23 14:49 PDT | 12 Sep 23 14:49 PDT |
	| start          | -p ingress-addon-legacy-627000           | ingress-addon-legacy-627000 | jenkins | v1.31.2 | 12 Sep 23 14:49 PDT | 12 Sep 23 14:50 PDT |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-627000              | ingress-addon-legacy-627000 | jenkins | v1.31.2 | 12 Sep 23 14:50 PDT | 12 Sep 23 14:51 PDT |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-627000              | ingress-addon-legacy-627000 | jenkins | v1.31.2 | 12 Sep 23 14:51 PDT | 12 Sep 23 14:51 PDT |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-627000              | ingress-addon-legacy-627000 | jenkins | v1.31.2 | 12 Sep 23 14:51 PDT | 12 Sep 23 14:51 PDT |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-627000 ip           | ingress-addon-legacy-627000 | jenkins | v1.31.2 | 12 Sep 23 14:51 PDT | 12 Sep 23 14:51 PDT |
	| addons         | ingress-addon-legacy-627000              | ingress-addon-legacy-627000 | jenkins | v1.31.2 | 12 Sep 23 14:51 PDT | 12 Sep 23 14:51 PDT |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-627000              | ingress-addon-legacy-627000 | jenkins | v1.31.2 | 12 Sep 23 14:51 PDT | 12 Sep 23 14:52 PDT |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 14:49:31
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 14:49:31.765680    2272 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:49:31.765920    2272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:49:31.765923    2272 out.go:309] Setting ErrFile to fd 2...
	I0912 14:49:31.765926    2272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:49:31.766087    2272 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:49:31.768443    2272 out.go:303] Setting JSON to false
	I0912 14:49:31.786374    2272 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1145,"bootTime":1694554226,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:49:31.786463    2272 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:49:31.791101    2272 out.go:177] * [ingress-addon-legacy-627000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:49:31.796897    2272 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 14:49:31.796934    2272 notify.go:220] Checking for updates...
	I0912 14:49:31.801920    2272 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:49:31.804927    2272 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:49:31.811948    2272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:49:31.814935    2272 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 14:49:31.817928    2272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:49:31.821110    2272 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:49:31.823818    2272 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 14:49:31.830978    2272 start.go:298] selected driver: qemu2
	I0912 14:49:31.830983    2272 start.go:902] validating driver "qemu2" against <nil>
	I0912 14:49:31.830990    2272 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:49:31.833108    2272 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 14:49:31.835879    2272 out.go:177] * Automatically selected the socket_vmnet network
	I0912 14:49:31.839096    2272 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 14:49:31.839121    2272 cni.go:84] Creating CNI manager for ""
	I0912 14:49:31.839131    2272 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 14:49:31.839135    2272 start_flags.go:321] config:
	{Name:ingress-addon-legacy-627000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-627000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:49:31.843705    2272 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:49:31.850920    2272 out.go:177] * Starting control plane node ingress-addon-legacy-627000 in cluster ingress-addon-legacy-627000
	I0912 14:49:31.854908    2272 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0912 14:49:32.068725    2272 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0912 14:49:32.068807    2272 cache.go:57] Caching tarball of preloaded images
	I0912 14:49:32.069545    2272 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0912 14:49:32.075029    2272 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0912 14:49:32.083065    2272 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:49:32.311022    2272 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0912 14:49:43.539583    2272 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:49:43.539722    2272 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:49:44.286149    2272 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0912 14:49:44.286332    2272 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/config.json ...
	I0912 14:49:44.286352    2272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/config.json: {Name:mk168e9d418fc512d0e36bfbd9dc2a151e5aeca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:49:44.286579    2272 start.go:365] acquiring machines lock for ingress-addon-legacy-627000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:49:44.286610    2272 start.go:369] acquired machines lock for "ingress-addon-legacy-627000" in 20µs
	I0912 14:49:44.286620    2272 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-627000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-627000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:49:44.286657    2272 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:49:44.291742    2272 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0912 14:49:44.305918    2272 start.go:159] libmachine.API.Create for "ingress-addon-legacy-627000" (driver="qemu2")
	I0912 14:49:44.305936    2272 client.go:168] LocalClient.Create starting
	I0912 14:49:44.305996    2272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:49:44.306026    2272 main.go:141] libmachine: Decoding PEM data...
	I0912 14:49:44.306035    2272 main.go:141] libmachine: Parsing certificate...
	I0912 14:49:44.306070    2272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:49:44.306089    2272 main.go:141] libmachine: Decoding PEM data...
	I0912 14:49:44.306098    2272 main.go:141] libmachine: Parsing certificate...
	I0912 14:49:44.306388    2272 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:49:44.450600    2272 main.go:141] libmachine: Creating SSH key...
	I0912 14:49:44.561286    2272 main.go:141] libmachine: Creating Disk image...
	I0912 14:49:44.561291    2272 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:49:44.561429    2272 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/disk.qcow2
	I0912 14:49:44.570152    2272 main.go:141] libmachine: STDOUT: 
	I0912 14:49:44.570178    2272 main.go:141] libmachine: STDERR: 
	I0912 14:49:44.570237    2272 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/disk.qcow2 +20000M
	I0912 14:49:44.577545    2272 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:49:44.577557    2272 main.go:141] libmachine: STDERR: 
	I0912 14:49:44.577580    2272 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/disk.qcow2
	I0912 14:49:44.577587    2272 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:49:44.577629    2272 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:58:84:7c:dd:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/disk.qcow2
	I0912 14:49:44.611878    2272 main.go:141] libmachine: STDOUT: 
	I0912 14:49:44.611901    2272 main.go:141] libmachine: STDERR: 
	I0912 14:49:44.611905    2272 main.go:141] libmachine: Attempt 0
	I0912 14:49:44.611918    2272 main.go:141] libmachine: Searching for da:58:84:7c:dd:d8 in /var/db/dhcpd_leases ...
	I0912 14:49:44.611990    2272 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0912 14:49:44.612012    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:52:77:fd:2f:2f:31 ID:1,52:77:fd:2f:2f:31 Lease:0x65022e57}
	I0912 14:49:44.612020    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:82:d7:33:ed:16:83 ID:1,82:d7:33:ed:16:83 Lease:0x65022d72}
	I0912 14:49:44.612041    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:92:55:8:ba:4f:b7 ID:1,92:55:8:ba:4f:b7 Lease:0x6500dbe6}
	I0912 14:49:44.612047    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:b8:71:61:13:50 ID:1,7e:b8:71:61:13:50 Lease:0x65022d1a}
	I0912 14:49:46.614179    2272 main.go:141] libmachine: Attempt 1
	I0912 14:49:46.614300    2272 main.go:141] libmachine: Searching for da:58:84:7c:dd:d8 in /var/db/dhcpd_leases ...
	I0912 14:49:46.614581    2272 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0912 14:49:46.614633    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:52:77:fd:2f:2f:31 ID:1,52:77:fd:2f:2f:31 Lease:0x65022e57}
	I0912 14:49:46.614669    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:82:d7:33:ed:16:83 ID:1,82:d7:33:ed:16:83 Lease:0x65022d72}
	I0912 14:49:46.614701    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:92:55:8:ba:4f:b7 ID:1,92:55:8:ba:4f:b7 Lease:0x6500dbe6}
	I0912 14:49:46.614765    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:b8:71:61:13:50 ID:1,7e:b8:71:61:13:50 Lease:0x65022d1a}
	I0912 14:49:48.617013    2272 main.go:141] libmachine: Attempt 2
	I0912 14:49:48.617081    2272 main.go:141] libmachine: Searching for da:58:84:7c:dd:d8 in /var/db/dhcpd_leases ...
	I0912 14:49:48.617194    2272 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0912 14:49:48.617206    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:52:77:fd:2f:2f:31 ID:1,52:77:fd:2f:2f:31 Lease:0x65022e57}
	I0912 14:49:48.617214    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:82:d7:33:ed:16:83 ID:1,82:d7:33:ed:16:83 Lease:0x65022d72}
	I0912 14:49:48.617220    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:92:55:8:ba:4f:b7 ID:1,92:55:8:ba:4f:b7 Lease:0x6500dbe6}
	I0912 14:49:48.617225    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:b8:71:61:13:50 ID:1,7e:b8:71:61:13:50 Lease:0x65022d1a}
	I0912 14:49:50.619300    2272 main.go:141] libmachine: Attempt 3
	I0912 14:49:50.619329    2272 main.go:141] libmachine: Searching for da:58:84:7c:dd:d8 in /var/db/dhcpd_leases ...
	I0912 14:49:50.619393    2272 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0912 14:49:50.619405    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:52:77:fd:2f:2f:31 ID:1,52:77:fd:2f:2f:31 Lease:0x65022e57}
	I0912 14:49:50.619410    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:82:d7:33:ed:16:83 ID:1,82:d7:33:ed:16:83 Lease:0x65022d72}
	I0912 14:49:50.619416    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:92:55:8:ba:4f:b7 ID:1,92:55:8:ba:4f:b7 Lease:0x6500dbe6}
	I0912 14:49:50.619423    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:b8:71:61:13:50 ID:1,7e:b8:71:61:13:50 Lease:0x65022d1a}
	I0912 14:49:52.621430    2272 main.go:141] libmachine: Attempt 4
	I0912 14:49:52.621441    2272 main.go:141] libmachine: Searching for da:58:84:7c:dd:d8 in /var/db/dhcpd_leases ...
	I0912 14:49:52.621471    2272 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0912 14:49:52.621478    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:52:77:fd:2f:2f:31 ID:1,52:77:fd:2f:2f:31 Lease:0x65022e57}
	I0912 14:49:52.621482    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:82:d7:33:ed:16:83 ID:1,82:d7:33:ed:16:83 Lease:0x65022d72}
	I0912 14:49:52.621487    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:92:55:8:ba:4f:b7 ID:1,92:55:8:ba:4f:b7 Lease:0x6500dbe6}
	I0912 14:49:52.621493    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:b8:71:61:13:50 ID:1,7e:b8:71:61:13:50 Lease:0x65022d1a}
	I0912 14:49:54.623510    2272 main.go:141] libmachine: Attempt 5
	I0912 14:49:54.623531    2272 main.go:141] libmachine: Searching for da:58:84:7c:dd:d8 in /var/db/dhcpd_leases ...
	I0912 14:49:54.623610    2272 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0912 14:49:54.623621    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:52:77:fd:2f:2f:31 ID:1,52:77:fd:2f:2f:31 Lease:0x65022e57}
	I0912 14:49:54.623627    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:82:d7:33:ed:16:83 ID:1,82:d7:33:ed:16:83 Lease:0x65022d72}
	I0912 14:49:54.623633    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:92:55:8:ba:4f:b7 ID:1,92:55:8:ba:4f:b7 Lease:0x6500dbe6}
	I0912 14:49:54.623638    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:b8:71:61:13:50 ID:1,7e:b8:71:61:13:50 Lease:0x65022d1a}
	I0912 14:49:56.625694    2272 main.go:141] libmachine: Attempt 6
	I0912 14:49:56.625730    2272 main.go:141] libmachine: Searching for da:58:84:7c:dd:d8 in /var/db/dhcpd_leases ...
	I0912 14:49:56.625808    2272 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0912 14:49:56.625841    2272 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:da:58:84:7c:dd:d8 ID:1,da:58:84:7c:dd:d8 Lease:0x65022e83}
	I0912 14:49:56.625848    2272 main.go:141] libmachine: Found match: da:58:84:7c:dd:d8
	I0912 14:49:56.625856    2272 main.go:141] libmachine: IP: 192.168.105.6
	I0912 14:49:56.625863    2272 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
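The retry loop above polls `/var/db/dhcpd_leases` until the VM's MAC address appears, then takes the IP from the matching record. As an annotation (not minikube's actual code), a minimal Go sketch of that lookup, assuming the simplified brace-delimited record shape that macOS writes and that the log's `dhcp entry` lines reflect:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// leases mimics the brace-delimited records macOS keeps in
// /var/db/dhcpd_leases (simplified; real entries also carry name and lease time).
const leases = `{
	name=minikube
	ip_address=192.168.105.5
	hw_address=1,52:77:fd:2f:2f:31
}
{
	name=minikube
	ip_address=192.168.105.6
	hw_address=1,da:58:84:7c:dd:d8
}`

// findIP scans the lease records for a hardware address and returns the
// ip_address from the same record, or "" when no record matches.
func findIP(leases, mac string) string {
	var ip string
	sc := bufio.NewScanner(strings.NewReader(leases))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
			return ip
		case line == "}":
			ip = "" // record boundary: forget the previous record's address
		}
	}
	return ""
}

func main() {
	fmt.Println(findIP(leases, "da:58:84:7c:dd:d8")) // → 192.168.105.6
}
```

The "Attempt N" lines correspond to repeating this scan every couple of seconds until the freshly-booted VM obtains a lease (attempt 6 here, when the lease file grows from 4 to 5 entries).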
	I0912 14:49:58.645395    2272 machine.go:88] provisioning docker machine ...
	I0912 14:49:58.645468    2272 buildroot.go:166] provisioning hostname "ingress-addon-legacy-627000"
	I0912 14:49:58.645666    2272 main.go:141] libmachine: Using SSH client type: native
	I0912 14:49:58.646601    2272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026b8760] 0x1026baed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0912 14:49:58.646635    2272 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-627000 && echo "ingress-addon-legacy-627000" | sudo tee /etc/hostname
	I0912 14:49:58.740686    2272 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-627000
	
	I0912 14:49:58.740842    2272 main.go:141] libmachine: Using SSH client type: native
	I0912 14:49:58.741350    2272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026b8760] 0x1026baed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0912 14:49:58.741371    2272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-627000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-627000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-627000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 14:49:58.810288    2272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 14:49:58.810307    2272 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17194-1051/.minikube CaCertPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17194-1051/.minikube}
	I0912 14:49:58.810328    2272 buildroot.go:174] setting up certificates
	I0912 14:49:58.810337    2272 provision.go:83] configureAuth start
	I0912 14:49:58.810344    2272 provision.go:138] copyHostCerts
	I0912 14:49:58.810391    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cert.pem
	I0912 14:49:58.810476    2272 exec_runner.go:144] found /Users/jenkins/minikube-integration/17194-1051/.minikube/cert.pem, removing ...
	I0912 14:49:58.810485    2272 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17194-1051/.minikube/cert.pem
	I0912 14:49:58.810658    2272 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17194-1051/.minikube/cert.pem (1123 bytes)
	I0912 14:49:58.810882    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17194-1051/.minikube/key.pem
	I0912 14:49:58.810913    2272 exec_runner.go:144] found /Users/jenkins/minikube-integration/17194-1051/.minikube/key.pem, removing ...
	I0912 14:49:58.810917    2272 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17194-1051/.minikube/key.pem
	I0912 14:49:58.810980    2272 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17194-1051/.minikube/key.pem (1679 bytes)
	I0912 14:49:58.811100    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.pem
	I0912 14:49:58.811130    2272 exec_runner.go:144] found /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.pem, removing ...
	I0912 14:49:58.811134    2272 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.pem
	I0912 14:49:58.811189    2272 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.pem (1082 bytes)
	I0912 14:49:58.811302    2272 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-627000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-627000]
	I0912 14:49:58.888126    2272 provision.go:172] copyRemoteCerts
	I0912 14:49:58.888156    2272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 14:49:58.888163    2272 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/id_rsa Username:docker}
	I0912 14:49:58.920854    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 14:49:58.920909    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 14:49:58.928291    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 14:49:58.928329    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0912 14:49:58.935227    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 14:49:58.935274    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 14:49:58.942143    2272 provision.go:86] duration metric: configureAuth took 131.796625ms
	I0912 14:49:58.942151    2272 buildroot.go:189] setting minikube options for container-runtime
	I0912 14:49:58.942256    2272 config.go:182] Loaded profile config "ingress-addon-legacy-627000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0912 14:49:58.942302    2272 main.go:141] libmachine: Using SSH client type: native
	I0912 14:49:58.942524    2272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026b8760] 0x1026baed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0912 14:49:58.942529    2272 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 14:49:59.003929    2272 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0912 14:49:59.003939    2272 buildroot.go:70] root file system type: tmpfs
	I0912 14:49:59.003994    2272 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 14:49:59.004039    2272 main.go:141] libmachine: Using SSH client type: native
	I0912 14:49:59.004305    2272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026b8760] 0x1026baed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0912 14:49:59.004344    2272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 14:49:59.073067    2272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 14:49:59.073126    2272 main.go:141] libmachine: Using SSH client type: native
	I0912 14:49:59.073419    2272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026b8760] 0x1026baed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0912 14:49:59.073429    2272 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 14:49:59.433292    2272 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0912 14:49:59.433304    2272 machine.go:91] provisioned docker machine in 787.8935ms
	I0912 14:49:59.433310    2272 client.go:171] LocalClient.Create took 15.127673459s
	I0912 14:49:59.433323    2272 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-627000" took 15.127711208s
	I0912 14:49:59.433329    2272 start.go:300] post-start starting for "ingress-addon-legacy-627000" (driver="qemu2")
	I0912 14:49:59.433334    2272 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 14:49:59.433396    2272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 14:49:59.433405    2272 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/id_rsa Username:docker}
	I0912 14:49:59.464772    2272 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 14:49:59.466089    2272 info.go:137] Remote host: Buildroot 2021.02.12
	I0912 14:49:59.466098    2272 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17194-1051/.minikube/addons for local assets ...
	I0912 14:49:59.466173    2272 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17194-1051/.minikube/files for local assets ...
	I0912 14:49:59.466280    2272 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17194-1051/.minikube/files/etc/ssl/certs/14702.pem -> 14702.pem in /etc/ssl/certs
	I0912 14:49:59.466285    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/files/etc/ssl/certs/14702.pem -> /etc/ssl/certs/14702.pem
	I0912 14:49:59.466391    2272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 14:49:59.472651    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/files/etc/ssl/certs/14702.pem --> /etc/ssl/certs/14702.pem (1708 bytes)
	I0912 14:49:59.480238    2272 start.go:303] post-start completed in 46.900834ms
	I0912 14:49:59.480642    2272 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/config.json ...
	I0912 14:49:59.480797    2272 start.go:128] duration metric: createHost completed in 15.194439459s
	I0912 14:49:59.480826    2272 main.go:141] libmachine: Using SSH client type: native
	I0912 14:49:59.481050    2272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1026b8760] 0x1026baed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0912 14:49:59.481055    2272 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0912 14:49:59.541737    2272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694555399.524825919
	
	I0912 14:49:59.541746    2272 fix.go:206] guest clock: 1694555399.524825919
	I0912 14:49:59.541750    2272 fix.go:219] Guest: 2023-09-12 14:49:59.524825919 -0700 PDT Remote: 2023-09-12 14:49:59.480801 -0700 PDT m=+27.738288001 (delta=44.024919ms)
	I0912 14:49:59.541761    2272 fix.go:190] guest clock delta is within tolerance: 44.024919ms
	I0912 14:49:59.541764    2272 start.go:83] releasing machines lock for "ingress-addon-legacy-627000", held for 15.255454917s
	I0912 14:49:59.542066    2272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 14:49:59.542087    2272 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/id_rsa Username:docker}
	I0912 14:49:59.542066    2272 ssh_runner.go:195] Run: cat /version.json
	I0912 14:49:59.542106    2272 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/id_rsa Username:docker}
	I0912 14:49:59.617424    2272 ssh_runner.go:195] Run: systemctl --version
	I0912 14:49:59.619538    2272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 14:49:59.621508    2272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 14:49:59.621544    2272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0912 14:49:59.624866    2272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0912 14:49:59.629796    2272 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 14:49:59.629803    2272 start.go:469] detecting cgroup driver to use...
	I0912 14:49:59.629879    2272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 14:49:59.637159    2272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0912 14:49:59.640237    2272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 14:49:59.643122    2272 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 14:49:59.643150    2272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 14:49:59.646225    2272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 14:49:59.649763    2272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 14:49:59.653029    2272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 14:49:59.656301    2272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 14:49:59.659022    2272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 14:49:59.662034    2272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 14:49:59.665160    2272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 14:49:59.667839    2272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:49:59.752007    2272 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 14:49:59.760349    2272 start.go:469] detecting cgroup driver to use...
	I0912 14:49:59.760404    2272 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 14:49:59.771482    2272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 14:49:59.775915    2272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 14:49:59.783519    2272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 14:49:59.788429    2272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 14:49:59.792949    2272 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 14:49:59.830019    2272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 14:49:59.835441    2272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 14:49:59.840522    2272 ssh_runner.go:195] Run: which cri-dockerd
	I0912 14:49:59.841882    2272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 14:49:59.844951    2272 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0912 14:49:59.849719    2272 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 14:49:59.933074    2272 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 14:50:00.009937    2272 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 14:50:00.009949    2272 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0912 14:50:00.015673    2272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:50:00.095335    2272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 14:50:01.259419    2272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.164088958s)
	I0912 14:50:01.259485    2272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 14:50:01.269181    2272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 14:50:01.285029    2272 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	I0912 14:50:01.285132    2272 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0912 14:50:01.286541    2272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 14:50:01.290375    2272 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0912 14:50:01.290423    2272 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 14:50:01.295960    2272 docker.go:636] Got preloaded images: 
	I0912 14:50:01.295969    2272 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0912 14:50:01.296007    2272 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0912 14:50:01.298784    2272 ssh_runner.go:195] Run: which lz4
	I0912 14:50:01.299924    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0912 14:50:01.300017    2272 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0912 14:50:01.301144    2272 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 14:50:01.301155    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
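The `existence check` above is a common pattern in this log: probe the remote path with `stat`, treat a non-zero exit status as "not present", and only then copy the preload tarball over. A minimal local sketch of that probe in Go (an annotation, not the ssh_runner implementation, which runs the command over SSH):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exists probes a path the way the log's existence check does: run stat and
// treat a non-zero exit status as "file missing, fall back to copying".
func exists(path string) (bool, error) {
	err := exec.Command("stat", path).Run()
	if err == nil {
		return true, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return false, nil // stat ran, but the path does not exist
	}
	return false, err // stat itself could not be executed
}

func main() {
	ok, _ := exists("/")
	fmt.Println(ok) // → true
}
```

Because `/preloaded.tar.lz4` fails the probe (`Process exited with status 1`), the runner proceeds to scp the ~460MB preloaded-images tarball into the VM, as the next log line shows.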
	I0912 14:50:03.007627    2272 docker.go:600] Took 1.707689 seconds to copy over tarball
	I0912 14:50:03.007680    2272 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 14:50:04.310637    2272 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.302965s)
	I0912 14:50:04.310654    2272 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 14:50:04.338823    2272 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0912 14:50:04.343728    2272 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0912 14:50:04.349938    2272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 14:50:04.431809    2272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 14:50:05.923740    2272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.491939s)
	I0912 14:50:05.923826    2272 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 14:50:05.929570    2272 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0912 14:50:05.929581    2272 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0912 14:50:05.929586    2272 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 14:50:05.942469    2272 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0912 14:50:05.942497    2272 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0912 14:50:05.942562    2272 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 14:50:05.942714    2272 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0912 14:50:05.942808    2272 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0912 14:50:05.943005    2272 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0912 14:50:05.943175    2272 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0912 14:50:05.948189    2272 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0912 14:50:05.952815    2272 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0912 14:50:05.955608    2272 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0912 14:50:05.955620    2272 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 14:50:05.955674    2272 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0912 14:50:05.955689    2272 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0912 14:50:05.955698    2272 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0912 14:50:05.955727    2272 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0912 14:50:05.956947    2272 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	W0912 14:50:06.502064    2272 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0912 14:50:06.502170    2272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0912 14:50:06.508571    2272 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0912 14:50:06.508594    2272 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0912 14:50:06.508647    2272 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0912 14:50:06.514428    2272 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0912 14:50:06.560163    2272 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0912 14:50:06.560267    2272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0912 14:50:06.566593    2272 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0912 14:50:06.566618    2272 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0912 14:50:06.566655    2272 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0912 14:50:06.572622    2272 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0912 14:50:06.974807    2272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0912 14:50:06.981149    2272 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0912 14:50:06.981176    2272 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0912 14:50:06.981230    2272 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0912 14:50:06.987079    2272 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0912 14:50:07.182730    2272 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0912 14:50:07.182860    2272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0912 14:50:07.189550    2272 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0912 14:50:07.189570    2272 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0912 14:50:07.189645    2272 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0912 14:50:07.196046    2272 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0912 14:50:07.399613    2272 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0912 14:50:07.399720    2272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0912 14:50:07.406082    2272 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0912 14:50:07.406110    2272 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0912 14:50:07.406154    2272 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0912 14:50:07.412155    2272 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0912 14:50:07.449002    2272 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0912 14:50:07.449115    2272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 14:50:07.455466    2272 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0912 14:50:07.455490    2272 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 14:50:07.455529    2272 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 14:50:07.466569    2272 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0912 14:50:07.613715    2272 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0912 14:50:07.613854    2272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0912 14:50:07.620211    2272 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0912 14:50:07.620238    2272 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0912 14:50:07.620282    2272 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0912 14:50:07.626246    2272 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0912 14:50:07.803676    2272 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0912 14:50:07.804114    2272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0912 14:50:07.820236    2272 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0912 14:50:07.820277    2272 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0912 14:50:07.820388    2272 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0912 14:50:07.831076    2272 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0912 14:50:07.831136    2272 cache_images.go:92] LoadImages completed in 1.90158125s
	W0912 14:50:07.831221    2272 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	I0912 14:50:07.831348    2272 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0912 14:50:07.845582    2272 cni.go:84] Creating CNI manager for ""
	I0912 14:50:07.845596    2272 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 14:50:07.845611    2272 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0912 14:50:07.845629    2272 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-627000 NodeName:ingress-addon-legacy-627000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0912 14:50:07.845755    2272 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-627000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
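	For reference, the kubeadm config rendered above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A hedged sketch that sanity-checks such a stream by counting its `kind:` lines (the file path and abbreviated contents are illustrative):

```shell
# Write an abbreviated four-document stream and count its kind: lines.
cat > /tmp/kubeadm-sketch.yaml <<'EOF'
kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' /tmp/kubeadm-sketch.yaml   # prints 4
rm -f /tmp/kubeadm-sketch.yaml
```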
	I0912 14:50:07.845823    2272 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-627000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-627000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0912 14:50:07.845895    2272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0912 14:50:07.850747    2272 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 14:50:07.850790    2272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 14:50:07.854441    2272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0912 14:50:07.860815    2272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0912 14:50:07.866822    2272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0912 14:50:07.872337    2272 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0912 14:50:07.873601    2272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
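	The one-liner above is minikube's idempotent hosts-file update: drop any stale `control-plane.minikube.internal` entry, append the current one, and copy the result back, so re-running it never duplicates the entry. A sketch of the same pattern against a scratch file instead of the real /etc/hosts (paths and the stale IP are illustrative):

```shell
# Idempotent hosts-entry update: filter out the old entry, append the new.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$hosts"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  echo $'192.168.105.6\tcontrol-plane.minikube.internal'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep control-plane "$hosts"   # exactly one entry, with the new IP
rm -f "$hosts"
```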
	I0912 14:50:07.877530    2272 certs.go:56] Setting up /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000 for IP: 192.168.105.6
	I0912 14:50:07.877540    2272 certs.go:190] acquiring lock for shared ca certs: {Name:mk62fa2aa67693071dd0720b8deb8309ed3c8567 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:50:07.877677    2272 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.key
	I0912 14:50:07.877718    2272 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.key
	I0912 14:50:07.877747    2272 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.key
	I0912 14:50:07.877755    2272 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt with IP's: []
	I0912 14:50:07.961550    2272 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt ...
	I0912 14:50:07.961554    2272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: {Name:mka6c7867bbbffbb721a57ab22ca4acfd0d71d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:50:07.961785    2272 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.key ...
	I0912 14:50:07.961789    2272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.key: {Name:mkec7386a86a1378cd2b74b05b155566ed7ec6f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:50:07.961899    2272 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/apiserver.key.b354f644
	I0912 14:50:07.961907    2272 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0912 14:50:07.998075    2272 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/apiserver.crt.b354f644 ...
	I0912 14:50:07.998078    2272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/apiserver.crt.b354f644: {Name:mk922f977a86238938968f8b88a7a54affa4f8dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:50:07.998212    2272 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/apiserver.key.b354f644 ...
	I0912 14:50:07.998215    2272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/apiserver.key.b354f644: {Name:mk082059462c8b2bec7ca9b0a2e97028ba32a75c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:50:07.998322    2272 certs.go:337] copying /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/apiserver.crt
	I0912 14:50:07.998417    2272 certs.go:341] copying /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/apiserver.key
	I0912 14:50:07.998497    2272 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/proxy-client.key
	I0912 14:50:07.998504    2272 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/proxy-client.crt with IP's: []
	I0912 14:50:08.139366    2272 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/proxy-client.crt ...
	I0912 14:50:08.139371    2272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/proxy-client.crt: {Name:mk26f56a4e9cde9c13e201935a01e74ed7847595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:50:08.139524    2272 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/proxy-client.key ...
	I0912 14:50:08.139527    2272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/proxy-client.key: {Name:mka986b359dd544b03025cd7681c881583c79f4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:50:08.139640    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 14:50:08.139658    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 14:50:08.139673    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 14:50:08.139685    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 14:50:08.139698    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 14:50:08.139717    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 14:50:08.139730    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 14:50:08.139741    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 14:50:08.139827    2272 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/1470.pem (1338 bytes)
	W0912 14:50:08.139856    2272 certs.go:433] ignoring /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/1470_empty.pem, impossibly tiny 0 bytes
	I0912 14:50:08.139865    2272 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca-key.pem (1675 bytes)
	I0912 14:50:08.139890    2272 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem (1082 bytes)
	I0912 14:50:08.139912    2272 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem (1123 bytes)
	I0912 14:50:08.139940    2272 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/certs/key.pem (1679 bytes)
	I0912 14:50:08.140003    2272 certs.go:437] found cert: /Users/jenkins/minikube-integration/17194-1051/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17194-1051/.minikube/files/etc/ssl/certs/14702.pem (1708 bytes)
	I0912 14:50:08.140024    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/1470.pem -> /usr/share/ca-certificates/1470.pem
	I0912 14:50:08.140035    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/files/etc/ssl/certs/14702.pem -> /usr/share/ca-certificates/14702.pem
	I0912 14:50:08.140046    2272 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 14:50:08.140477    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0912 14:50:08.148760    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 14:50:08.155739    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 14:50:08.162400    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 14:50:08.169662    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 14:50:08.177079    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0912 14:50:08.184268    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 14:50:08.191087    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 14:50:08.197814    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/1470.pem --> /usr/share/ca-certificates/1470.pem (1338 bytes)
	I0912 14:50:08.205086    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/files/etc/ssl/certs/14702.pem --> /usr/share/ca-certificates/14702.pem (1708 bytes)
	I0912 14:50:08.212046    2272 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17194-1051/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 14:50:08.218747    2272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 14:50:08.223670    2272 ssh_runner.go:195] Run: openssl version
	I0912 14:50:08.225645    2272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 14:50:08.229213    2272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 14:50:08.230988    2272 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:44 /usr/share/ca-certificates/minikubeCA.pem
	I0912 14:50:08.231005    2272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 14:50:08.232778    2272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 14:50:08.235901    2272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1470.pem && ln -fs /usr/share/ca-certificates/1470.pem /etc/ssl/certs/1470.pem"
	I0912 14:50:08.238847    2272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1470.pem
	I0912 14:50:08.240270    2272 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:45 /usr/share/ca-certificates/1470.pem
	I0912 14:50:08.240292    2272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1470.pem
	I0912 14:50:08.242059    2272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1470.pem /etc/ssl/certs/51391683.0"
	I0912 14:50:08.245474    2272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14702.pem && ln -fs /usr/share/ca-certificates/14702.pem /etc/ssl/certs/14702.pem"
	I0912 14:50:08.248990    2272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14702.pem
	I0912 14:50:08.250576    2272 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:45 /usr/share/ca-certificates/14702.pem
	I0912 14:50:08.250593    2272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14702.pem
	I0912 14:50:08.252442    2272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14702.pem /etc/ssl/certs/3ec20f2e.0"
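	The `ln -fs ... /etc/ssl/certs/<hash>.0` commands above implement OpenSSL's hashed-certificate directory lookup: `openssl x509 -hash -noout` prints the subject-name hash, and verification finds a CA by looking for `<hash>.0` in the cert directory. A sketch with a throwaway self-signed cert (subject name and paths are illustrative):

```shell
# Generate a throwaway self-signed cert, compute its subject hash, and
# create the <hash>.0 symlink that -CApath lookup expects.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=sketchCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
openssl verify -CApath "$dir" "$dir/ca.pem"   # self-signed root verifies OK
rm -rf "$dir"
```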
	I0912 14:50:08.255462    2272 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0912 14:50:08.256741    2272 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0912 14:50:08.256773    2272 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-627000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-627000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:50:08.256842    2272 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 14:50:08.262329    2272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 14:50:08.265651    2272 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 14:50:08.268923    2272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 14:50:08.271731    2272 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 14:50:08.271750    2272 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0912 14:50:08.301393    2272 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0912 14:50:08.301437    2272 kubeadm.go:322] [preflight] Running pre-flight checks
	I0912 14:50:08.383707    2272 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 14:50:08.383766    2272 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 14:50:08.383826    2272 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 14:50:08.429568    2272 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 14:50:08.430230    2272 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 14:50:08.430292    2272 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0912 14:50:08.524964    2272 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 14:50:08.534197    2272 out.go:204]   - Generating certificates and keys ...
	I0912 14:50:08.534239    2272 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0912 14:50:08.534283    2272 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0912 14:50:08.596072    2272 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 14:50:08.760130    2272 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0912 14:50:09.060501    2272 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0912 14:50:09.097082    2272 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0912 14:50:09.202415    2272 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0912 14:50:09.202492    2272 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-627000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0912 14:50:09.318692    2272 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0912 14:50:09.318758    2272 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-627000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0912 14:50:09.455723    2272 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 14:50:09.522778    2272 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 14:50:09.575063    2272 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0912 14:50:09.575090    2272 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 14:50:09.621361    2272 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 14:50:09.894382    2272 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 14:50:09.986179    2272 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 14:50:10.066034    2272 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 14:50:10.066299    2272 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 14:50:10.075856    2272 out.go:204]   - Booting up control plane ...
	I0912 14:50:10.075918    2272 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 14:50:10.075960    2272 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 14:50:10.076009    2272 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 14:50:10.076052    2272 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 14:50:10.076133    2272 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 14:50:21.076581    2272 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.003684 seconds
	I0912 14:50:21.076779    2272 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 14:50:21.088685    2272 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 14:50:21.620028    2272 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 14:50:21.620243    2272 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-627000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0912 14:50:22.131594    2272 kubeadm.go:322] [bootstrap-token] Using token: we1n6j.3gu1u5f8i6j45wfq
	I0912 14:50:22.135744    2272 out.go:204]   - Configuring RBAC rules ...
	I0912 14:50:22.135882    2272 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 14:50:22.141392    2272 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 14:50:22.152499    2272 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 14:50:22.155239    2272 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 14:50:22.157701    2272 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 14:50:22.160265    2272 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 14:50:22.167031    2272 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 14:50:22.363174    2272 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0912 14:50:22.543142    2272 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0912 14:50:22.543822    2272 kubeadm.go:322] 
	I0912 14:50:22.543862    2272 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0912 14:50:22.543873    2272 kubeadm.go:322] 
	I0912 14:50:22.543943    2272 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0912 14:50:22.543949    2272 kubeadm.go:322] 
	I0912 14:50:22.544000    2272 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0912 14:50:22.544046    2272 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 14:50:22.544087    2272 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 14:50:22.544093    2272 kubeadm.go:322] 
	I0912 14:50:22.544136    2272 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0912 14:50:22.544218    2272 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 14:50:22.544276    2272 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 14:50:22.544281    2272 kubeadm.go:322] 
	I0912 14:50:22.544344    2272 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 14:50:22.544407    2272 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0912 14:50:22.544411    2272 kubeadm.go:322] 
	I0912 14:50:22.544467    2272 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token we1n6j.3gu1u5f8i6j45wfq \
	I0912 14:50:22.544549    2272 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e282c167e7eeeb67fd4ecdd8b7cd7118f3d3f8a2efd76b40b6ed9b18bf47a7d9 \
	I0912 14:50:22.544567    2272 kubeadm.go:322]     --control-plane 
	I0912 14:50:22.544574    2272 kubeadm.go:322] 
	I0912 14:50:22.544646    2272 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0912 14:50:22.544652    2272 kubeadm.go:322] 
	I0912 14:50:22.544708    2272 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token we1n6j.3gu1u5f8i6j45wfq \
	I0912 14:50:22.544798    2272 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e282c167e7eeeb67fd4ecdd8b7cd7118f3d3f8a2efd76b40b6ed9b18bf47a7d9 
	I0912 14:50:22.544954    2272 kubeadm.go:322] W0912 21:50:08.284517    1416 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0912 14:50:22.545089    2272 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0912 14:50:22.545175    2272 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I0912 14:50:22.545256    2272 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 14:50:22.545343    2272 kubeadm.go:322] W0912 21:50:10.053682    1416 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0912 14:50:22.545443    2272 kubeadm.go:322] W0912 21:50:10.054481    1416 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0912 14:50:22.545456    2272 cni.go:84] Creating CNI manager for ""
	I0912 14:50:22.545466    2272 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 14:50:22.545479    2272 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 14:50:22.545573    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45f04e6c33f17ea86560d581e35f03eca0c584e1 minikube.k8s.io/name=ingress-addon-legacy-627000 minikube.k8s.io/updated_at=2023_09_12T14_50_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:22.545578    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:22.559853    2272 ops.go:34] apiserver oom_adj: -16
	I0912 14:50:22.612135    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:22.648075    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:23.213849    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:23.713645    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:24.213697    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:24.713838    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:25.213770    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:25.713829    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:26.213873    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:26.713828    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:27.213811    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:27.713779    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:28.213849    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:28.713734    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:29.213852    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:29.713897    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:30.213757    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:30.713710    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:31.213745    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:31.713692    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:32.213775    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:32.713626    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:33.213639    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:33.713670    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:34.213626    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:34.713423    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:35.213663    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:35.713678    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:36.213684    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:36.713687    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:37.213565    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:37.713432    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:38.213336    2272 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 14:50:38.261911    2272 kubeadm.go:1081] duration metric: took 15.716719708s to wait for elevateKubeSystemPrivileges.
	I0912 14:50:38.261926    2272 kubeadm.go:406] StartCluster complete in 30.005755292s
	I0912 14:50:38.261939    2272 settings.go:142] acquiring lock: {Name:mke2a1c2b91a69fc9538d2ab9217887ccaa535ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:50:38.262020    2272 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:50:38.262365    2272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/kubeconfig: {Name:mk92e8fca531d1e53b216ab5c46209b819337697 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:50:38.262570    2272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 14:50:38.262620    2272 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0912 14:50:38.262659    2272 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-627000"
	I0912 14:50:38.262670    2272 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-627000"
	I0912 14:50:38.262682    2272 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-627000"
	I0912 14:50:38.262688    2272 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-627000"
	I0912 14:50:38.262694    2272 host.go:66] Checking if "ingress-addon-legacy-627000" exists ...
	I0912 14:50:38.263021    2272 kapi.go:59] client config for ingress-addon-legacy-627000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.key", CAFile:"/Users/jenkins/minikube-integration/17194-1051/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10399ff10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 14:50:38.263280    2272 config.go:182] Loaded profile config "ingress-addon-legacy-627000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0912 14:50:38.263595    2272 cert_rotation.go:137] Starting client certificate rotation controller
	I0912 14:50:38.264146    2272 kapi.go:59] client config for ingress-addon-legacy-627000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.key", CAFile:"/Users/jenkins/minikube-integration/17194-1051/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10399ff10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 14:50:38.267673    2272 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 14:50:38.271651    2272 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 14:50:38.271660    2272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 14:50:38.271668    2272 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/id_rsa Username:docker}
	I0912 14:50:38.277536    2272 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-627000"
	I0912 14:50:38.277555    2272 host.go:66] Checking if "ingress-addon-legacy-627000" exists ...
	I0912 14:50:38.278262    2272 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 14:50:38.278268    2272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 14:50:38.278275    2272 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/ingress-addon-legacy-627000/id_rsa Username:docker}
	I0912 14:50:38.297870    2272 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-627000" context rescaled to 1 replicas
	I0912 14:50:38.297888    2272 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:50:38.303559    2272 out.go:177] * Verifying Kubernetes components...
	I0912 14:50:38.314567    2272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 14:50:38.317173    2272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 14:50:38.381163    2272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 14:50:38.381438    2272 kapi.go:59] client config for ingress-addon-legacy-627000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.key", CAFile:"/Users/jenkins/minikube-integration/17194-1051/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10399ff10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 14:50:38.381573    2272 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-627000" to be "Ready" ...
	I0912 14:50:38.391170    2272 node_ready.go:49] node "ingress-addon-legacy-627000" has status "Ready":"True"
	I0912 14:50:38.391179    2272 node_ready.go:38] duration metric: took 9.595667ms waiting for node "ingress-addon-legacy-627000" to be "Ready" ...
	I0912 14:50:38.391184    2272 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 14:50:38.399910    2272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-dzrch" in "kube-system" namespace to be "Ready" ...
	I0912 14:50:38.413137    2272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 14:50:38.674687    2272 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0912 14:50:38.686759    2272 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0912 14:50:38.694644    2272 addons.go:502] enable addons completed in 432.061042ms: enabled=[storage-provisioner default-storageclass]
	I0912 14:50:40.420552    2272 pod_ready.go:102] pod "coredns-66bff467f8-dzrch" in "kube-system" namespace has status "Ready":"False"
	I0912 14:50:42.429521    2272 pod_ready.go:102] pod "coredns-66bff467f8-dzrch" in "kube-system" namespace has status "Ready":"False"
	I0912 14:50:44.433388    2272 pod_ready.go:102] pod "coredns-66bff467f8-dzrch" in "kube-system" namespace has status "Ready":"False"
	I0912 14:50:46.929971    2272 pod_ready.go:102] pod "coredns-66bff467f8-dzrch" in "kube-system" namespace has status "Ready":"False"
	I0912 14:50:47.920709    2272 pod_ready.go:92] pod "coredns-66bff467f8-dzrch" in "kube-system" namespace has status "Ready":"True"
	I0912 14:50:47.920721    2272 pod_ready.go:81] duration metric: took 9.520987542s waiting for pod "coredns-66bff467f8-dzrch" in "kube-system" namespace to be "Ready" ...
	I0912 14:50:47.920727    2272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-grtx8" in "kube-system" namespace to be "Ready" ...
	I0912 14:50:49.433407    2272 pod_ready.go:97] error getting pod "coredns-66bff467f8-grtx8" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-grtx8" not found
	I0912 14:50:49.433461    2272 pod_ready.go:81] duration metric: took 1.512753833s waiting for pod "coredns-66bff467f8-grtx8" in "kube-system" namespace to be "Ready" ...
	E0912 14:50:49.433488    2272 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-grtx8" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-grtx8" not found
	I0912 14:50:49.433505    2272 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-627000" in "kube-system" namespace to be "Ready" ...
	I0912 14:50:49.442691    2272 pod_ready.go:92] pod "etcd-ingress-addon-legacy-627000" in "kube-system" namespace has status "Ready":"True"
	I0912 14:50:49.442714    2272 pod_ready.go:81] duration metric: took 9.195333ms waiting for pod "etcd-ingress-addon-legacy-627000" in "kube-system" namespace to be "Ready" ...
	I0912 14:50:49.442731    2272 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-627000" in "kube-system" namespace to be "Ready" ...
	I0912 14:50:49.449823    2272 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-627000" in "kube-system" namespace has status "Ready":"True"
	I0912 14:50:49.449843    2272 pod_ready.go:81] duration metric: took 7.101417ms waiting for pod "kube-apiserver-ingress-addon-legacy-627000" in "kube-system" namespace to be "Ready" ...
	I0912 14:50:49.449856    2272 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-627000" in "kube-system" namespace to be "Ready" ...
	I0912 14:50:49.456421    2272 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-627000" in "kube-system" namespace has status "Ready":"True"
	I0912 14:50:49.456444    2272 pod_ready.go:81] duration metric: took 6.577291ms waiting for pod "kube-controller-manager-ingress-addon-legacy-627000" in "kube-system" namespace to be "Ready" ...
	I0912 14:50:49.456459    2272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ln6fc" in "kube-system" namespace to be "Ready" ...
	I0912 14:50:49.519094    2272 request.go:629] Waited for 62.5355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ln6fc
	I0912 14:50:49.719166    2272 request.go:629] Waited for 195.20025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-627000
	I0912 14:50:49.726149    2272 pod_ready.go:92] pod "kube-proxy-ln6fc" in "kube-system" namespace has status "Ready":"True"
	I0912 14:50:49.726183    2272 pod_ready.go:81] duration metric: took 269.715875ms waiting for pod "kube-proxy-ln6fc" in "kube-system" namespace to be "Ready" ...
	I0912 14:50:49.726203    2272 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-627000" in "kube-system" namespace to be "Ready" ...
	I0912 14:50:49.919077    2272 request.go:629] Waited for 192.7735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-627000
	I0912 14:50:50.119093    2272 request.go:629] Waited for 192.346875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-627000
	I0912 14:50:50.123603    2272 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-627000" in "kube-system" namespace has status "Ready":"True"
	I0912 14:50:50.123635    2272 pod_ready.go:81] duration metric: took 397.415791ms waiting for pod "kube-scheduler-ingress-addon-legacy-627000" in "kube-system" namespace to be "Ready" ...
	I0912 14:50:50.123649    2272 pod_ready.go:38] duration metric: took 11.732689541s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 14:50:50.123680    2272 api_server.go:52] waiting for apiserver process to appear ...
	I0912 14:50:50.123822    2272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 14:50:50.143616    2272 api_server.go:72] duration metric: took 11.845931292s to wait for apiserver process to appear ...
	I0912 14:50:50.143648    2272 api_server.go:88] waiting for apiserver healthz status ...
	I0912 14:50:50.143676    2272 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0912 14:50:50.155825    2272 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0912 14:50:50.157196    2272 api_server.go:141] control plane version: v1.18.20
	I0912 14:50:50.157215    2272 api_server.go:131] duration metric: took 13.558208ms to wait for apiserver health ...
	I0912 14:50:50.157224    2272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 14:50:50.319093    2272 request.go:629] Waited for 161.779375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0912 14:50:50.333092    2272 system_pods.go:59] 7 kube-system pods found
	I0912 14:50:50.333130    2272 system_pods.go:61] "coredns-66bff467f8-dzrch" [0026f813-2cad-49af-ad31-69d784005a61] Running
	I0912 14:50:50.333141    2272 system_pods.go:61] "etcd-ingress-addon-legacy-627000" [bfb07bae-9339-4ba4-9cb3-5556d8e53580] Running
	I0912 14:50:50.333150    2272 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-627000" [71c503a3-626d-4b5e-8ece-bb3f7d606f60] Running
	I0912 14:50:50.333160    2272 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-627000" [bc20c754-e323-4c13-9144-2683db180fa7] Running
	I0912 14:50:50.333193    2272 system_pods.go:61] "kube-proxy-ln6fc" [919a73df-a883-4bae-a6d5-8f78c546dccb] Running
	I0912 14:50:50.333209    2272 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-627000" [3fe1fee7-6336-47ba-9b7c-2d672f8746be] Running
	I0912 14:50:50.333232    2272 system_pods.go:61] "storage-provisioner" [58dabaa8-b6ff-4dfd-ba4f-5bab104781d3] Running
	I0912 14:50:50.333259    2272 system_pods.go:74] duration metric: took 176.011584ms to wait for pod list to return data ...
	I0912 14:50:50.333277    2272 default_sa.go:34] waiting for default service account to be created ...
	I0912 14:50:50.519088    2272 request.go:629] Waited for 185.695125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0912 14:50:50.525598    2272 default_sa.go:45] found service account: "default"
	I0912 14:50:50.525639    2272 default_sa.go:55] duration metric: took 192.348292ms for default service account to be created ...
	I0912 14:50:50.525659    2272 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 14:50:50.718533    2272 request.go:629] Waited for 192.7385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0912 14:50:50.731661    2272 system_pods.go:86] 7 kube-system pods found
	I0912 14:50:50.731700    2272 system_pods.go:89] "coredns-66bff467f8-dzrch" [0026f813-2cad-49af-ad31-69d784005a61] Running
	I0912 14:50:50.731713    2272 system_pods.go:89] "etcd-ingress-addon-legacy-627000" [bfb07bae-9339-4ba4-9cb3-5556d8e53580] Running
	I0912 14:50:50.731735    2272 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-627000" [71c503a3-626d-4b5e-8ece-bb3f7d606f60] Running
	I0912 14:50:50.731748    2272 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-627000" [bc20c754-e323-4c13-9144-2683db180fa7] Running
	I0912 14:50:50.731758    2272 system_pods.go:89] "kube-proxy-ln6fc" [919a73df-a883-4bae-a6d5-8f78c546dccb] Running
	I0912 14:50:50.731771    2272 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-627000" [3fe1fee7-6336-47ba-9b7c-2d672f8746be] Running
	I0912 14:50:50.731780    2272 system_pods.go:89] "storage-provisioner" [58dabaa8-b6ff-4dfd-ba4f-5bab104781d3] Running
	I0912 14:50:50.731797    2272 system_pods.go:126] duration metric: took 206.13175ms to wait for k8s-apps to be running ...
	I0912 14:50:50.731813    2272 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 14:50:50.732081    2272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 14:50:50.748113    2272 system_svc.go:56] duration metric: took 16.298167ms WaitForService to wait for kubelet.
	I0912 14:50:50.748144    2272 kubeadm.go:581] duration metric: took 12.450490084s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0912 14:50:50.748168    2272 node_conditions.go:102] verifying NodePressure condition ...
	I0912 14:50:50.919125    2272 request.go:629] Waited for 170.835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0912 14:50:50.927648    2272 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0912 14:50:50.927699    2272 node_conditions.go:123] node cpu capacity is 2
	I0912 14:50:50.927732    2272 node_conditions.go:105] duration metric: took 179.557666ms to run NodePressure ...
	I0912 14:50:50.927767    2272 start.go:228] waiting for startup goroutines ...
	I0912 14:50:50.927787    2272 start.go:233] waiting for cluster config update ...
	I0912 14:50:50.927814    2272 start.go:242] writing updated cluster config ...
	I0912 14:50:50.929241    2272 ssh_runner.go:195] Run: rm -f paused
	I0912 14:50:50.993824    2272 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0912 14:50:50.998292    2272 out.go:177] 
	W0912 14:50:51.001125    2272 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0912 14:50:51.005178    2272 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0912 14:50:51.013201    2272 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-627000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-09-12 21:49:55 UTC, ends at Tue 2023-09-12 21:52:04 UTC. --
	Sep 12 21:51:39 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:39.636272007Z" level=info msg="shim disconnected" id=d6b441dc5cde406dd3b2505abfd22baeb27a279872ca2e81012bb2f209507959 namespace=moby
	Sep 12 21:51:39 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:39.636308590Z" level=warning msg="cleaning up after shim disconnected" id=d6b441dc5cde406dd3b2505abfd22baeb27a279872ca2e81012bb2f209507959 namespace=moby
	Sep 12 21:51:39 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:39.636314715Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:51:50 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:50.829360364Z" level=info msg="shim disconnected" id=3dede064c21ebd1cd5b74d1cc9d7640bfa3e0644b063f7f51b136141e0a9cb42 namespace=moby
	Sep 12 21:51:50 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:50.829428822Z" level=warning msg="cleaning up after shim disconnected" id=3dede064c21ebd1cd5b74d1cc9d7640bfa3e0644b063f7f51b136141e0a9cb42 namespace=moby
	Sep 12 21:51:50 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:50.829437656Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:51:50 ingress-addon-legacy-627000 dockerd[1093]: time="2023-09-12T21:51:50.829673656Z" level=info msg="ignoring event" container=3dede064c21ebd1cd5b74d1cc9d7640bfa3e0644b063f7f51b136141e0a9cb42 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:51:55 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:55.841048963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 21:51:55 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:55.841420006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:51:55 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:55.841444339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 21:51:55 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:55.841452922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 21:51:55 ingress-addon-legacy-627000 dockerd[1093]: time="2023-09-12T21:51:55.888103659Z" level=info msg="ignoring event" container=67c898aba95fe4429d7d363f784a08de7d54cea0ae8253f7778156cb048ea198 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:51:55 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:55.888126117Z" level=info msg="shim disconnected" id=67c898aba95fe4429d7d363f784a08de7d54cea0ae8253f7778156cb048ea198 namespace=moby
	Sep 12 21:51:55 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:55.888150784Z" level=warning msg="cleaning up after shim disconnected" id=67c898aba95fe4429d7d363f784a08de7d54cea0ae8253f7778156cb048ea198 namespace=moby
	Sep 12 21:51:55 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:55.888154992Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:51:59 ingress-addon-legacy-627000 dockerd[1093]: time="2023-09-12T21:51:59.303783446Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=6e9228be215d1ba1be3dedc45380957f0ebdfaaab5fedb16f9339b4738235aab
	Sep 12 21:51:59 ingress-addon-legacy-627000 dockerd[1093]: time="2023-09-12T21:51:59.314052127Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=6e9228be215d1ba1be3dedc45380957f0ebdfaaab5fedb16f9339b4738235aab
	Sep 12 21:51:59 ingress-addon-legacy-627000 dockerd[1093]: time="2023-09-12T21:51:59.405543834Z" level=info msg="ignoring event" container=6e9228be215d1ba1be3dedc45380957f0ebdfaaab5fedb16f9339b4738235aab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:51:59 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:59.407277545Z" level=info msg="shim disconnected" id=6e9228be215d1ba1be3dedc45380957f0ebdfaaab5fedb16f9339b4738235aab namespace=moby
	Sep 12 21:51:59 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:59.407692921Z" level=warning msg="cleaning up after shim disconnected" id=6e9228be215d1ba1be3dedc45380957f0ebdfaaab5fedb16f9339b4738235aab namespace=moby
	Sep 12 21:51:59 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:59.407816462Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:51:59 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:59.450328937Z" level=info msg="shim disconnected" id=678c0bf5d53f8e40316084f195124d72560aa41c821ee6c1243544b4e8867d51 namespace=moby
	Sep 12 21:51:59 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:59.450476229Z" level=warning msg="cleaning up after shim disconnected" id=678c0bf5d53f8e40316084f195124d72560aa41c821ee6c1243544b4e8867d51 namespace=moby
	Sep 12 21:51:59 ingress-addon-legacy-627000 dockerd[1100]: time="2023-09-12T21:51:59.450774771Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 12 21:51:59 ingress-addon-legacy-627000 dockerd[1093]: time="2023-09-12T21:51:59.450930479Z" level=info msg="ignoring event" container=678c0bf5d53f8e40316084f195124d72560aa41c821ee6c1243544b4e8867d51 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	67c898aba95fe       a39a074194753                                                                                                      9 seconds ago        Exited              hello-world-app           2                   5443e027cdcff
	fd09dff3e79ce       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                                      36 seconds ago       Running             nginx                     0                   5e758bea82940
	6e9228be215d1       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   55 seconds ago       Exited              controller                0                   678c0bf5d53f8
	627b13df24d2e       a883f7fc35610                                                                                                      About a minute ago   Exited              patch                     1                   1f0fbd30adcd8
	e09a3afb4975f       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   80f82470313a7
	2c3a7623ed63f       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   847250a2b64bc
	acc6978120e0d       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   2a9ac927151ba
	d3ea31058dc8b       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   6cc50ec28b06d
	4275fe7225361       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   419362f7f7bbf
	5b367f672bc82       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   5abb775e9f548
	e5e1c82f44d88       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   8ab934ab7fc72
	8870977c50a96       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   7137be87e0907
	
	* 
	* ==> coredns [acc6978120e0] <==
	* [INFO] 172.17.0.1:7031 - 43162 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000009708s
	[INFO] 172.17.0.1:34419 - 44585 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030417s
	[INFO] 172.17.0.1:7031 - 62572 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008208s
	[INFO] 172.17.0.1:34419 - 64616 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00002575s
	[INFO] 172.17.0.1:7031 - 71 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000007916s
	[INFO] 172.17.0.1:34419 - 43472 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000023417s
	[INFO] 172.17.0.1:7031 - 53299 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008291s
	[INFO] 172.17.0.1:34419 - 52970 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000023083s
	[INFO] 172.17.0.1:7031 - 9609 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000007875s
	[INFO] 172.17.0.1:7031 - 53633 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000032209s
	[INFO] 172.17.0.1:34419 - 49066 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00001175s
	[INFO] 172.17.0.1:22376 - 2939 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000034167s
	[INFO] 172.17.0.1:1927 - 52687 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033833s
	[INFO] 172.17.0.1:22376 - 37359 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000045417s
	[INFO] 172.17.0.1:22376 - 2207 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040583s
	[INFO] 172.17.0.1:1927 - 59439 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00002025s
	[INFO] 172.17.0.1:22376 - 10864 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000016917s
	[INFO] 172.17.0.1:1927 - 31285 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013666s
	[INFO] 172.17.0.1:22376 - 10325 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013625s
	[INFO] 172.17.0.1:1927 - 51302 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047292s
	[INFO] 172.17.0.1:1927 - 58277 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000019s
	[INFO] 172.17.0.1:1927 - 5319 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000019833s
	[INFO] 172.17.0.1:22376 - 17061 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000018917s
	[INFO] 172.17.0.1:22376 - 65382 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000021584s
	[INFO] 172.17.0.1:1927 - 3550 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000013583s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-627000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-627000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45f04e6c33f17ea86560d581e35f03eca0c584e1
	                    minikube.k8s.io/name=ingress-addon-legacy-627000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_12T14_50_22_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 21:50:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-627000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 21:51:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 21:51:58 +0000   Tue, 12 Sep 2023 21:50:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 21:51:58 +0000   Tue, 12 Sep 2023 21:50:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 21:51:58 +0000   Tue, 12 Sep 2023 21:50:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Sep 2023 21:51:58 +0000   Tue, 12 Sep 2023 21:50:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-627000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a65e6d9c6904b4a95e383fe7382a6fd
	  System UUID:                8a65e6d9c6904b4a95e383fe7382a6fd
	  Boot ID:                    40b6303c-24c1-4365-a084-6cec5225deb1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
  default                     hello-world-app-5f5d8b66bb-r4pqg                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
  kube-system                 coredns-66bff467f8-dzrch                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     86s
  kube-system                 etcd-ingress-addon-legacy-627000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
  kube-system                 kube-apiserver-ingress-addon-legacy-627000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
  kube-system                 kube-controller-manager-ingress-addon-legacy-627000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
  kube-system                 kube-proxy-ln6fc                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
  kube-system                 kube-scheduler-ingress-addon-legacy-627000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
  cpu                650m (32%)  0 (0%)
  memory             70Mi (1%)   170Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 96s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  96s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  96s   kubelet     Node ingress-addon-legacy-627000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s   kubelet     Node ingress-addon-legacy-627000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s   kubelet     Node ingress-addon-legacy-627000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                96s   kubelet     Node ingress-addon-legacy-627000 status is now: NodeReady
	  Normal  Starting                 86s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep12 21:49] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.667862] EINJ: EINJ table not found.
	[  +0.525925] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043870] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000833] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.129606] systemd-fstab-generator[485]: Ignoring "noauto" for root device
	[  +0.076442] systemd-fstab-generator[496]: Ignoring "noauto" for root device
	[  +0.451888] systemd-fstab-generator[802]: Ignoring "noauto" for root device
	[  +0.180983] systemd-fstab-generator[840]: Ignoring "noauto" for root device
	[  +0.076363] systemd-fstab-generator[851]: Ignoring "noauto" for root device
	[  +0.085675] systemd-fstab-generator[864]: Ignoring "noauto" for root device
	[Sep12 21:50] systemd-fstab-generator[1066]: Ignoring "noauto" for root device
	[  +1.471234] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.614185] systemd-fstab-generator[1534]: Ignoring "noauto" for root device
	[  +8.556214] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.093012] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +5.089079] systemd-fstab-generator[2632]: Ignoring "noauto" for root device
	[ +16.410386] kauditd_printk_skb: 7 callbacks suppressed
	[  +9.040521] kauditd_printk_skb: 8 callbacks suppressed
	[  +9.069627] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Sep12 21:51] kauditd_printk_skb: 1 callbacks suppressed
	[  +9.443995] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [8870977c50a9] <==
	* raft2023/09/12 21:50:18 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/09/12 21:50:18 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/12 21:50:18 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/09/12 21:50:18 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-12 21:50:18.083179 W | auth: simple token is not cryptographically signed
	2023-09-12 21:50:18.084076 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-12 21:50:18.085742 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/09/12 21:50:18 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-12 21:50:18.086021 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	2023-09-12 21:50:18.086420 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-12 21:50:18.086517 I | embed: listening for peers on 192.168.105.6:2380
	2023-09-12 21:50:18.086657 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/09/12 21:50:18 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/09/12 21:50:18 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/09/12 21:50:18 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/09/12 21:50:18 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/09/12 21:50:18 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-09-12 21:50:18.182087 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-12 21:50:18.182379 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-12 21:50:18.182418 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-12 21:50:18.182461 I | etcdserver: published {Name:ingress-addon-legacy-627000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-09-12 21:50:18.182474 I | embed: ready to serve client requests
	2023-09-12 21:50:18.182527 I | embed: ready to serve client requests
	2023-09-12 21:50:18.183197 I | embed: serving client requests on 192.168.105.6:2379
	2023-09-12 21:50:18.187067 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  21:52:04 up 2 min,  0 users,  load average: 0.51, 0.30, 0.11
	Linux ingress-addon-legacy-627000 5.10.57 #1 SMP PREEMPT Mon Sep 11 23:30:27 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4275fe722536] <==
	* I0912 21:50:19.714515       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0912 21:50:19.749296       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0912 21:50:19.799393       1 cache.go:39] Caches are synced for autoregister controller
	I0912 21:50:19.799450       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0912 21:50:19.799477       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0912 21:50:19.799508       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0912 21:50:19.815057       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0912 21:50:20.696191       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0912 21:50:20.696335       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0912 21:50:20.708850       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0912 21:50:20.714310       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0912 21:50:20.714344       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0912 21:50:20.845131       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 21:50:20.855603       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0912 21:50:20.961542       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0912 21:50:20.961966       1 controller.go:609] quota admission added evaluator for: endpoints
	I0912 21:50:20.963698       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0912 21:50:21.997871       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0912 21:50:22.342695       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0912 21:50:22.513929       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0912 21:50:28.739092       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 21:50:37.976394       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0912 21:50:38.026339       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0912 21:50:51.290271       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0912 21:51:25.391050       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [5b367f672bc8] <==
	* I0912 21:50:38.012383       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0912 21:50:38.012521       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0912 21:50:38.012695       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-627000", UID:"da749be6-cb4d-46e9-8466-3cbe6289371e", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-627000 event: Registered Node ingress-addon-legacy-627000 in Controller
	I0912 21:50:38.024959       1 shared_informer.go:230] Caches are synced for deployment 
	I0912 21:50:38.027433       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1e5202a0-a2dd-463f-baf0-1bf8d71beef6", APIVersion:"apps/v1", ResourceVersion:"186", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I0912 21:50:38.028910       1 shared_informer.go:230] Caches are synced for job 
	I0912 21:50:38.031809       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"f760550f-5da2-42bb-ab83-5509728daca8", APIVersion:"apps/v1", ResourceVersion:"325", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-grtx8
	I0912 21:50:38.035184       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"f760550f-5da2-42bb-ab83-5509728daca8", APIVersion:"apps/v1", ResourceVersion:"325", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-dzrch
	I0912 21:50:38.044243       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0912 21:50:38.044330       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0912 21:50:38.047726       1 shared_informer.go:230] Caches are synced for stateful set 
	I0912 21:50:38.060359       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0912 21:50:38.061164       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0912 21:50:38.061196       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0912 21:50:38.296564       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1e5202a0-a2dd-463f-baf0-1bf8d71beef6", APIVersion:"apps/v1", ResourceVersion:"343", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0912 21:50:38.318110       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"f760550f-5da2-42bb-ab83-5509728daca8", APIVersion:"apps/v1", ResourceVersion:"344", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-grtx8
	I0912 21:50:51.285683       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"fe7a09f0-8043-4221-b755-a090847ef98d", APIVersion:"apps/v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0912 21:50:51.298268       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"1537f589-46e0-4829-9ecd-ab99f8f46567", APIVersion:"batch/v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-8t7t8
	I0912 21:50:51.298286       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"1639884e-008b-458f-9ef7-959322e2a32d", APIVersion:"apps/v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-g8zmp
	I0912 21:50:51.322424       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"91335dba-f99b-4e81-ac37-ec412c86b615", APIVersion:"batch/v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-d2s6c
	I0912 21:50:56.109592       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"1537f589-46e0-4829-9ecd-ab99f8f46567", APIVersion:"batch/v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0912 21:50:57.099901       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"91335dba-f99b-4e81-ac37-ec412c86b615", APIVersion:"batch/v1", ResourceVersion:"464", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0912 21:51:34.678889       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"188d0c73-b499-4aa3-8c88-1d797d29ca87", APIVersion:"apps/v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0912 21:51:34.688984       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"3c8013a9-21ee-42dd-a564-8d63ee7af487", APIVersion:"apps/v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-r4pqg
	E0912 21:52:02.087019       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-ffqk4" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [d3ea31058dc8] <==
	* W0912 21:50:38.601292       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0912 21:50:38.606834       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0912 21:50:38.606865       1 server_others.go:186] Using iptables Proxier.
	I0912 21:50:38.607037       1 server.go:583] Version: v1.18.20
	I0912 21:50:38.614631       1 config.go:133] Starting endpoints config controller
	I0912 21:50:38.614742       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0912 21:50:38.614971       1 config.go:315] Starting service config controller
	I0912 21:50:38.616694       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0912 21:50:38.714968       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0912 21:50:38.717311       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [e5e1c82f44d8] <==
	* W0912 21:50:19.723026       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0912 21:50:19.723032       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0912 21:50:19.742870       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0912 21:50:19.742907       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0912 21:50:19.743778       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 21:50:19.743793       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 21:50:19.745726       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0912 21:50:19.745786       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0912 21:50:19.746537       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:50:19.746945       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 21:50:19.747154       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 21:50:19.747206       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 21:50:19.748005       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 21:50:19.748046       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:50:19.748073       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 21:50:19.748098       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 21:50:19.748145       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 21:50:19.748170       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 21:50:19.748699       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 21:50:19.748715       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 21:50:20.638517       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 21:50:20.672668       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:50:20.692097       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 21:50:20.776957       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0912 21:50:23.245235       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-12 21:49:55 UTC, ends at Tue 2023-09-12 21:52:04 UTC. --
	Sep 12 21:51:41 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:51:41.597507    2638 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d6b441dc5cde406dd3b2505abfd22baeb27a279872ca2e81012bb2f209507959
	Sep 12 21:51:41 ingress-addon-legacy-627000 kubelet[2638]: E0912 21:51:41.599093    2638 pod_workers.go:191] Error syncing pod bfee859a-1d1f-4e01-84a2-ed0c448351b3 ("hello-world-app-5f5d8b66bb-r4pqg_default(bfee859a-1d1f-4e01-84a2-ed0c448351b3)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-r4pqg_default(bfee859a-1d1f-4e01-84a2-ed0c448351b3)"
	Sep 12 21:51:44 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:51:44.788429    2638 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d90ecd288ccb240ae23e3b522bc6d3121f746afd94188a23496499004377df96
	Sep 12 21:51:44 ingress-addon-legacy-627000 kubelet[2638]: E0912 21:51:44.788881    2638 pod_workers.go:191] Error syncing pod 7b33dec4-5796-4b01-bbdd-30c3c83cc30c ("kube-ingress-dns-minikube_kube-system(7b33dec4-5796-4b01-bbdd-30c3c83cc30c)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(7b33dec4-5796-4b01-bbdd-30c3c83cc30c)"
	Sep 12 21:51:50 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:51:50.072670    2638 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-g8xdl" (UniqueName: "kubernetes.io/secret/7b33dec4-5796-4b01-bbdd-30c3c83cc30c-minikube-ingress-dns-token-g8xdl") pod "7b33dec4-5796-4b01-bbdd-30c3c83cc30c" (UID: "7b33dec4-5796-4b01-bbdd-30c3c83cc30c")
	Sep 12 21:51:50 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:51:50.074697    2638 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b33dec4-5796-4b01-bbdd-30c3c83cc30c-minikube-ingress-dns-token-g8xdl" (OuterVolumeSpecName: "minikube-ingress-dns-token-g8xdl") pod "7b33dec4-5796-4b01-bbdd-30c3c83cc30c" (UID: "7b33dec4-5796-4b01-bbdd-30c3c83cc30c"). InnerVolumeSpecName "minikube-ingress-dns-token-g8xdl". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 12 21:51:50 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:51:50.172906    2638 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-g8xdl" (UniqueName: "kubernetes.io/secret/7b33dec4-5796-4b01-bbdd-30c3c83cc30c-minikube-ingress-dns-token-g8xdl") on node "ingress-addon-legacy-627000" DevicePath ""
	Sep 12 21:51:51 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:51:51.754440    2638 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d90ecd288ccb240ae23e3b522bc6d3121f746afd94188a23496499004377df96
	Sep 12 21:51:55 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:51:55.783464    2638 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d6b441dc5cde406dd3b2505abfd22baeb27a279872ca2e81012bb2f209507959
	Sep 12 21:51:55 ingress-addon-legacy-627000 kubelet[2638]: W0912 21:51:55.901385    2638 container.go:412] Failed to create summary reader for "/kubepods/besteffort/podbfee859a-1d1f-4e01-84a2-ed0c448351b3/67c898aba95fe4429d7d363f784a08de7d54cea0ae8253f7778156cb048ea198": none of the resources are being tracked.
	Sep 12 21:51:56 ingress-addon-legacy-627000 kubelet[2638]: W0912 21:51:56.840686    2638 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-r4pqg through plugin: invalid network status for
	Sep 12 21:51:56 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:51:56.844360    2638 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d6b441dc5cde406dd3b2505abfd22baeb27a279872ca2e81012bb2f209507959
	Sep 12 21:51:56 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:51:56.844550    2638 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 67c898aba95fe4429d7d363f784a08de7d54cea0ae8253f7778156cb048ea198
	Sep 12 21:51:56 ingress-addon-legacy-627000 kubelet[2638]: E0912 21:51:56.844694    2638 pod_workers.go:191] Error syncing pod bfee859a-1d1f-4e01-84a2-ed0c448351b3 ("hello-world-app-5f5d8b66bb-r4pqg_default(bfee859a-1d1f-4e01-84a2-ed0c448351b3)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-r4pqg_default(bfee859a-1d1f-4e01-84a2-ed0c448351b3)"
	Sep 12 21:51:57 ingress-addon-legacy-627000 kubelet[2638]: E0912 21:51:57.286385    2638 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-g8zmp.17844543cbf22f1d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-g8zmp", UID:"64726f6d-2fcb-4650-981b-17cfc74edd77", APIVersion:"v1", ResourceVersion:"454", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-627000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc138553f50fb8d1d, ext:94965945147, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc138553f50fb8d1d, ext:94965945147, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-g8zmp.17844543cbf22f1d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 12 21:51:57 ingress-addon-legacy-627000 kubelet[2638]: E0912 21:51:57.300082    2638 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-g8zmp.17844543cbf22f1d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-g8zmp", UID:"64726f6d-2fcb-4650-981b-17cfc74edd77", APIVersion:"v1", ResourceVersion:"454", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-627000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc138553f50fb8d1d, ext:94965945147, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc138553f518e61b6, ext:94975567827, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-g8zmp.17844543cbf22f1d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 12 21:51:57 ingress-addon-legacy-627000 kubelet[2638]: W0912 21:51:57.861434    2638 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-r4pqg through plugin: invalid network status for
	Sep 12 21:51:59 ingress-addon-legacy-627000 kubelet[2638]: W0912 21:51:59.906054    2638 pod_container_deletor.go:77] Container "678c0bf5d53f8e40316084f195124d72560aa41c821ee6c1243544b4e8867d51" not found in pod's containers
	Sep 12 21:52:01 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:52:01.489580    2638 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/64726f6d-2fcb-4650-981b-17cfc74edd77-webhook-cert") pod "64726f6d-2fcb-4650-981b-17cfc74edd77" (UID: "64726f6d-2fcb-4650-981b-17cfc74edd77")
	Sep 12 21:52:01 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:52:01.489712    2638 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-pmstt" (UniqueName: "kubernetes.io/secret/64726f6d-2fcb-4650-981b-17cfc74edd77-ingress-nginx-token-pmstt") pod "64726f6d-2fcb-4650-981b-17cfc74edd77" (UID: "64726f6d-2fcb-4650-981b-17cfc74edd77")
	Sep 12 21:52:01 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:52:01.508304    2638 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64726f6d-2fcb-4650-981b-17cfc74edd77-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "64726f6d-2fcb-4650-981b-17cfc74edd77" (UID: "64726f6d-2fcb-4650-981b-17cfc74edd77"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 12 21:52:01 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:52:01.508494    2638 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64726f6d-2fcb-4650-981b-17cfc74edd77-ingress-nginx-token-pmstt" (OuterVolumeSpecName: "ingress-nginx-token-pmstt") pod "64726f6d-2fcb-4650-981b-17cfc74edd77" (UID: "64726f6d-2fcb-4650-981b-17cfc74edd77"). InnerVolumeSpecName "ingress-nginx-token-pmstt". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 12 21:52:01 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:52:01.590073    2638 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/64726f6d-2fcb-4650-981b-17cfc74edd77-webhook-cert") on node "ingress-addon-legacy-627000" DevicePath ""
	Sep 12 21:52:01 ingress-addon-legacy-627000 kubelet[2638]: I0912 21:52:01.590269    2638 reconciler.go:319] Volume detached for volume "ingress-nginx-token-pmstt" (UniqueName: "kubernetes.io/secret/64726f6d-2fcb-4650-981b-17cfc74edd77-ingress-nginx-token-pmstt") on node "ingress-addon-legacy-627000" DevicePath ""
	Sep 12 21:52:02 ingress-addon-legacy-627000 kubelet[2638]: W0912 21:52:02.809036    2638 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/64726f6d-2fcb-4650-981b-17cfc74edd77/volumes" does not exist
	
	* 
	* ==> storage-provisioner [2c3a7623ed63] <==
	* I0912 21:50:41.093644       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:50:41.098036       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:50:41.098057       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:50:41.103298       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:50:41.103656       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0005d49e-a83d-4c11-8c83-1decf70b2694", APIVersion:"v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-627000_5a420ed7-597c-499b-be65-f37ea1e39674 became leader
	I0912 21:50:41.103755       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-627000_5a420ed7-597c-499b-be65-f37ea1e39674!
	I0912 21:50:41.204963       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-627000_5a420ed7-597c-499b-be65-f37ea1e39674!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-627000 -n ingress-addon-legacy-627000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-627000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (54.07s)

TestMountStart/serial/StartWithMountFirst (10.51s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-601000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E0912 14:54:18.697854    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-601000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.438651084s)

-- stdout --
	* [mount-start-1-601000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-601000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-601000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-601000 -n mount-start-1-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-601000 -n mount-start-1-601000: exit status 7 (68.558667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.51s)
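Every failure in this group traces back to the same root cause visible in the stderr: nothing is accepting connections on the `/var/run/socket_vmnet` unix socket, so `socket_vmnet_client` gets ECONNREFUSED before QEMU is even launched. A quick way to tell "daemon not running" apart from "socket file missing" outside of minikube is a direct unix-socket probe. This is a diagnostic sketch, not part of the test suite; the socket path is the one from the log above:

```python
import socket

def probe_unix_socket(path: str) -> str:
    """Classify a unix-domain socket path: 'listening', 'refused', 'missing', or 'error'."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "listening"          # a daemon accepted the connection
    except FileNotFoundError:
        return "missing"            # socket file does not exist at all
    except ConnectionRefusedError:
        return "refused"            # file exists but no process is listening
    except OSError:
        return "error"              # e.g. permission denied
    finally:
        s.close()

if __name__ == "__main__":
    # The path minikube's qemu2 driver uses on this host (from the log).
    print(probe_unix_socket("/var/run/socket_vmnet"))
```

On the failing CI host this would report "refused" or "missing", matching the `Failed to connect to "/var/run/socket_vmnet": Connection refused` lines; restarting the socket_vmnet daemon (however it is managed on the host) is the usual remediation.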

TestMultiNode/serial/FreshStart2Nodes (10.09s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-914000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-914000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.022030417s)

-- stdout --
	* [multinode-914000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-914000 in cluster multinode-914000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-914000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 14:54:19.335446    2606 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:54:19.335802    2606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:54:19.335807    2606 out.go:309] Setting ErrFile to fd 2...
	I0912 14:54:19.335810    2606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:54:19.336001    2606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:54:19.337403    2606 out.go:303] Setting JSON to false
	I0912 14:54:19.352735    2606 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1433,"bootTime":1694554226,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:54:19.352798    2606 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:54:19.358136    2606 out.go:177] * [multinode-914000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:54:19.365128    2606 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 14:54:19.369038    2606 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:54:19.365197    2606 notify.go:220] Checking for updates...
	I0912 14:54:19.375092    2606 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:54:19.378112    2606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:54:19.381138    2606 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 14:54:19.384136    2606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:54:19.387196    2606 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:54:19.391083    2606 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 14:54:19.397058    2606 start.go:298] selected driver: qemu2
	I0912 14:54:19.397063    2606 start.go:902] validating driver "qemu2" against <nil>
	I0912 14:54:19.397069    2606 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:54:19.399022    2606 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 14:54:19.402069    2606 out.go:177] * Automatically selected the socket_vmnet network
	I0912 14:54:19.410293    2606 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 14:54:19.410319    2606 cni.go:84] Creating CNI manager for ""
	I0912 14:54:19.410323    2606 cni.go:136] 0 nodes found, recommending kindnet
	I0912 14:54:19.410333    2606 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0912 14:54:19.410341    2606 start_flags.go:321] config:
	{Name:multinode-914000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-914000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAge
ntPID:0 AutoPauseInterval:1m0s}
	I0912 14:54:19.414815    2606 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:54:19.422094    2606 out.go:177] * Starting control plane node multinode-914000 in cluster multinode-914000
	I0912 14:54:19.426092    2606 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 14:54:19.426123    2606 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 14:54:19.426145    2606 cache.go:57] Caching tarball of preloaded images
	I0912 14:54:19.426222    2606 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 14:54:19.426229    2606 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 14:54:19.426493    2606 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/multinode-914000/config.json ...
	I0912 14:54:19.426509    2606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/multinode-914000/config.json: {Name:mkc8961fe695178b8e9c0e2593812d5030d3d691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:54:19.426735    2606 start.go:365] acquiring machines lock for multinode-914000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:54:19.426769    2606 start.go:369] acquired machines lock for "multinode-914000" in 27.792µs
	I0912 14:54:19.426783    2606 start.go:93] Provisioning new machine with config: &{Name:multinode-914000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-914000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:54:19.426822    2606 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:54:19.435001    2606 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 14:54:19.452183    2606 start.go:159] libmachine.API.Create for "multinode-914000" (driver="qemu2")
	I0912 14:54:19.452209    2606 client.go:168] LocalClient.Create starting
	I0912 14:54:19.452269    2606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:54:19.452303    2606 main.go:141] libmachine: Decoding PEM data...
	I0912 14:54:19.452319    2606 main.go:141] libmachine: Parsing certificate...
	I0912 14:54:19.452361    2606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:54:19.452385    2606 main.go:141] libmachine: Decoding PEM data...
	I0912 14:54:19.452393    2606 main.go:141] libmachine: Parsing certificate...
	I0912 14:54:19.452742    2606 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:54:19.580262    2606 main.go:141] libmachine: Creating SSH key...
	I0912 14:54:19.689795    2606 main.go:141] libmachine: Creating Disk image...
	I0912 14:54:19.689800    2606 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:54:19.689937    2606 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2
	I0912 14:54:19.698617    2606 main.go:141] libmachine: STDOUT: 
	I0912 14:54:19.698632    2606 main.go:141] libmachine: STDERR: 
	I0912 14:54:19.698692    2606 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2 +20000M
	I0912 14:54:19.705814    2606 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:54:19.705828    2606 main.go:141] libmachine: STDERR: 
	I0912 14:54:19.705841    2606 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2
	I0912 14:54:19.705857    2606 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:54:19.705884    2606 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:12:8f:e1:97:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2
	I0912 14:54:19.707393    2606 main.go:141] libmachine: STDOUT: 
	I0912 14:54:19.707406    2606 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:54:19.707424    2606 client.go:171] LocalClient.Create took 255.214583ms
	I0912 14:54:21.709592    2606 start.go:128] duration metric: createHost completed in 2.282792208s
	I0912 14:54:21.709657    2606 start.go:83] releasing machines lock for "multinode-914000", held for 2.282921208s
	W0912 14:54:21.709714    2606 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:54:21.716032    2606 out.go:177] * Deleting "multinode-914000" in qemu2 ...
	W0912 14:54:21.735212    2606 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:54:21.735242    2606 start.go:703] Will try again in 5 seconds ...
	I0912 14:54:26.737454    2606 start.go:365] acquiring machines lock for multinode-914000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:54:26.737950    2606 start.go:369] acquired machines lock for "multinode-914000" in 384.458µs
	I0912 14:54:26.738088    2606 start.go:93] Provisioning new machine with config: &{Name:multinode-914000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-914000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:54:26.738334    2606 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:54:26.748069    2606 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 14:54:26.795209    2606 start.go:159] libmachine.API.Create for "multinode-914000" (driver="qemu2")
	I0912 14:54:26.795258    2606 client.go:168] LocalClient.Create starting
	I0912 14:54:26.795417    2606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:54:26.795494    2606 main.go:141] libmachine: Decoding PEM data...
	I0912 14:54:26.795512    2606 main.go:141] libmachine: Parsing certificate...
	I0912 14:54:26.795608    2606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:54:26.795646    2606 main.go:141] libmachine: Decoding PEM data...
	I0912 14:54:26.795662    2606 main.go:141] libmachine: Parsing certificate...
	I0912 14:54:26.796153    2606 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:54:27.001675    2606 main.go:141] libmachine: Creating SSH key...
	I0912 14:54:27.268596    2606 main.go:141] libmachine: Creating Disk image...
	I0912 14:54:27.268609    2606 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:54:27.268780    2606 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2
	I0912 14:54:27.277656    2606 main.go:141] libmachine: STDOUT: 
	I0912 14:54:27.277674    2606 main.go:141] libmachine: STDERR: 
	I0912 14:54:27.277733    2606 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2 +20000M
	I0912 14:54:27.285036    2606 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:54:27.285051    2606 main.go:141] libmachine: STDERR: 
	I0912 14:54:27.285068    2606 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2
	I0912 14:54:27.285075    2606 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:54:27.285122    2606 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:20:b7:1a:ae:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2
	I0912 14:54:27.286635    2606 main.go:141] libmachine: STDOUT: 
	I0912 14:54:27.286652    2606 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:54:27.286664    2606 client.go:171] LocalClient.Create took 491.410958ms
	I0912 14:54:29.288797    2606 start.go:128] duration metric: createHost completed in 2.550485167s
	I0912 14:54:29.288865    2606 start.go:83] releasing machines lock for "multinode-914000", held for 2.550941459s
	W0912 14:54:29.289197    2606 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-914000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-914000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:54:29.299955    2606 out.go:177] 
	W0912 14:54:29.303928    2606 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 14:54:29.303954    2606 out.go:239] * 
	* 
	W0912 14:54:29.306622    2606 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:54:29.316881    2606 out.go:177] 

** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-914000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000: exit status 7 (66.745417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-914000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.09s)
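The trace above also shows the driver's recovery shape: `createHost` fails, the profile is deleted, and after "Will try again in 5 seconds" one more attempt is made before minikube exits with GUEST_PROVISION (exit status 80). In outline the retry behaves like this; a hedged sketch with hypothetical names, not minikube's actual Go implementation:

```python
import time

def start_with_retry(create_host, attempts=2, delay_s=5):
    """Call create_host() up to `attempts` times, pausing between tries.

    Mirrors the log: the first failure is treated as transient
    ("StartHost failed, but will try again"), the last one propagates
    to the caller, which turns it into the fatal GUEST_PROVISION exit.
    """
    last_err = None
    for i in range(attempts):
        try:
            return create_host()
        except OSError as err:       # e.g. ECONNREFUSED from socket_vmnet
            last_err = err
            if i < attempts - 1:
                time.sleep(delay_s)  # the log shows a 5 s pause here
    raise last_err
```

Because the underlying socket_vmnet daemon never comes back within those 5 seconds, both attempts hit the same ECONNREFUSED, which is why every test in this run fails after roughly 10 seconds.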

TestMultiNode/serial/DeployApp2Nodes (101.51s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (128.38325ms)

** stderr ** 
	error: cluster "multinode-914000" does not exist

** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- rollout status deployment/busybox: exit status 1 (56.908583ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.289417ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.62125ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.907791ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.56ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.255208ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.282333ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.722042ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.923208ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.253167ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0912 14:55:40.617545    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.743541ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}'
E0912 14:56:10.613957    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
E0912 14:56:10.620349    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
E0912 14:56:10.632467    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
E0912 14:56:10.653411    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.399291ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].metadata.name}'
E0912 14:56:10.694208    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.665084ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- exec  -- nslookup kubernetes.io: exit status 1 (54.772875ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- exec  -- nslookup kubernetes.default
E0912 14:56:10.775474    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.614875ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (54.697041ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000: exit status 7 (28.984791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-914000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (101.51s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].metadata.name}'
E0912 14:56:10.936239    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-914000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.483791ms)

** stderr ** 
	error: no server found for cluster "multinode-914000"

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000: exit status 7 (29.518625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-914000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-914000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-914000 -v 3 --alsologtostderr: exit status 89 (40.31975ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-914000"

-- /stdout --
** stderr ** 
	I0912 14:56:11.018659    2684 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:56:11.018851    2684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:11.018854    2684 out.go:309] Setting ErrFile to fd 2...
	I0912 14:56:11.018857    2684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:11.018993    2684 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:56:11.019263    2684 mustload.go:65] Loading cluster: multinode-914000
	I0912 14:56:11.019485    2684 config.go:182] Loaded profile config "multinode-914000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:56:11.023842    2684 out.go:177] * The control plane node must be running for this command
	I0912 14:56:11.026931    2684 out.go:177]   To start a cluster, run: "minikube start -p multinode-914000"

** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-914000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000: exit status 7 (29.509125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-914000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-914000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-914000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-914000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.1\",\"ClusterName\":\"multinode-914000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000: exit status 7 (29.463625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-914000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-914000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-914000 status --output json --alsologtostderr: exit status 7 (29.713459ms)

-- stdout --
	{"Name":"multinode-914000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0912 14:56:11.187783    2694 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:56:11.187921    2694 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:11.187924    2694 out.go:309] Setting ErrFile to fd 2...
	I0912 14:56:11.187928    2694 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:11.188085    2694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:56:11.188207    2694 out.go:303] Setting JSON to true
	I0912 14:56:11.188217    2694 mustload.go:65] Loading cluster: multinode-914000
	I0912 14:56:11.188272    2694 notify.go:220] Checking for updates...
	I0912 14:56:11.188416    2694 config.go:182] Loaded profile config "multinode-914000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:56:11.188420    2694 status.go:255] checking status of multinode-914000 ...
	I0912 14:56:11.188634    2694 status.go:330] multinode-914000 host status = "Stopped" (err=<nil>)
	I0912 14:56:11.188638    2694 status.go:343] host is not running, skipping remaining checks
	I0912 14:56:11.188640    2694 status.go:257] multinode-914000 status: &{Name:multinode-914000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-914000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000: exit status 7 (29.016209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-914000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-914000 node stop m03
E0912 14:56:11.256622    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-914000 node stop m03: exit status 85 (46.662125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-914000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-914000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-914000 status: exit status 7 (29.390667ms)

-- stdout --
	multinode-914000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-914000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-914000 status --alsologtostderr: exit status 7 (29.03075ms)

-- stdout --
	multinode-914000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 14:56:11.322822    2702 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:56:11.322965    2702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:11.322968    2702 out.go:309] Setting ErrFile to fd 2...
	I0912 14:56:11.322971    2702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:11.323092    2702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:56:11.323227    2702 out.go:303] Setting JSON to false
	I0912 14:56:11.323238    2702 mustload.go:65] Loading cluster: multinode-914000
	I0912 14:56:11.323298    2702 notify.go:220] Checking for updates...
	I0912 14:56:11.323450    2702 config.go:182] Loaded profile config "multinode-914000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:56:11.323455    2702 status.go:255] checking status of multinode-914000 ...
	I0912 14:56:11.323657    2702 status.go:330] multinode-914000 host status = "Stopped" (err=<nil>)
	I0912 14:56:11.323661    2702 status.go:343] host is not running, skipping remaining checks
	I0912 14:56:11.323663    2702 status.go:257] multinode-914000 status: &{Name:multinode-914000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-914000 status --alsologtostderr": multinode-914000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000: exit status 7 (29.01675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-914000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

TestMultiNode/serial/StartAfterStop (0.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-914000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-914000 node start m03 --alsologtostderr: exit status 85 (43.949083ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0912 14:56:11.381171    2706 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:56:11.381392    2706 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:11.381394    2706 out.go:309] Setting ErrFile to fd 2...
	I0912 14:56:11.381397    2706 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:11.381516    2706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:56:11.381752    2706 mustload.go:65] Loading cluster: multinode-914000
	I0912 14:56:11.381949    2706 config.go:182] Loaded profile config "multinode-914000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:56:11.386067    2706 out.go:177] 
	W0912 14:56:11.389076    2706 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0912 14:56:11.389081    2706 out.go:239] * 
	* 
	W0912 14:56:11.390633    2706 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:56:11.393018    2706 out.go:177] 

** /stderr **
multinode_test.go:256: I0912 14:56:11.381171    2706 out.go:296] Setting OutFile to fd 1 ...
I0912 14:56:11.381392    2706 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 14:56:11.381394    2706 out.go:309] Setting ErrFile to fd 2...
I0912 14:56:11.381397    2706 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 14:56:11.381516    2706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
I0912 14:56:11.381752    2706 mustload.go:65] Loading cluster: multinode-914000
I0912 14:56:11.381949    2706 config.go:182] Loaded profile config "multinode-914000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 14:56:11.386067    2706 out.go:177] 
W0912 14:56:11.389076    2706 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0912 14:56:11.389081    2706 out.go:239] * 
* 
W0912 14:56:11.390633    2706 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0912 14:56:11.393018    2706 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-914000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-914000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-914000 status: exit status 7 (30.136542ms)

-- stdout --
	multinode-914000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-914000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000: exit status 7 (29.55475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-914000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)

TestMultiNode/serial/RestartKeepsNodes (5.37s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-914000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-914000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-914000 --wait=true -v=8 --alsologtostderr
E0912 14:56:11.899194    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
E0912 14:56:13.181670    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
E0912 14:56:15.743938    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-914000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.17925925s)

-- stdout --
	* [multinode-914000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-914000 in cluster multinode-914000
	* Restarting existing qemu2 VM for "multinode-914000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-914000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 14:56:11.575951    2716 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:56:11.576090    2716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:11.576093    2716 out.go:309] Setting ErrFile to fd 2...
	I0912 14:56:11.576095    2716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:11.576225    2716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:56:11.577183    2716 out.go:303] Setting JSON to false
	I0912 14:56:11.592237    2716 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1545,"bootTime":1694554226,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:56:11.592315    2716 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:56:11.597067    2716 out.go:177] * [multinode-914000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:56:11.602040    2716 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 14:56:11.602092    2716 notify.go:220] Checking for updates...
	I0912 14:56:11.606078    2716 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:56:11.609008    2716 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:56:11.612060    2716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:56:11.616058    2716 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 14:56:11.618977    2716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:56:11.622329    2716 config.go:182] Loaded profile config "multinode-914000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:56:11.622375    2716 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:56:11.627010    2716 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 14:56:11.633998    2716 start.go:298] selected driver: qemu2
	I0912 14:56:11.634002    2716 start.go:902] validating driver "qemu2" against &{Name:multinode-914000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:multinode-914000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:56:11.634078    2716 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:56:11.636063    2716 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 14:56:11.636089    2716 cni.go:84] Creating CNI manager for ""
	I0912 14:56:11.636094    2716 cni.go:136] 1 nodes found, recommending kindnet
	I0912 14:56:11.636103    2716 start_flags.go:321] config:
	{Name:multinode-914000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-914000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:56:11.640240    2716 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:56:11.646984    2716 out.go:177] * Starting control plane node multinode-914000 in cluster multinode-914000
	I0912 14:56:11.650954    2716 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 14:56:11.650987    2716 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 14:56:11.650999    2716 cache.go:57] Caching tarball of preloaded images
	I0912 14:56:11.651076    2716 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 14:56:11.651092    2716 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 14:56:11.651167    2716 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/multinode-914000/config.json ...
	I0912 14:56:11.651556    2716 start.go:365] acquiring machines lock for multinode-914000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:56:11.651590    2716 start.go:369] acquired machines lock for "multinode-914000" in 27.583µs
	I0912 14:56:11.651599    2716 start.go:96] Skipping create...Using existing machine configuration
	I0912 14:56:11.651606    2716 fix.go:54] fixHost starting: 
	I0912 14:56:11.651731    2716 fix.go:102] recreateIfNeeded on multinode-914000: state=Stopped err=<nil>
	W0912 14:56:11.651742    2716 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 14:56:11.660011    2716 out.go:177] * Restarting existing qemu2 VM for "multinode-914000" ...
	I0912 14:56:11.663897    2716 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:20:b7:1a:ae:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2
	I0912 14:56:11.665796    2716 main.go:141] libmachine: STDOUT: 
	I0912 14:56:11.665811    2716 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:56:11.665840    2716 fix.go:56] fixHost completed within 14.235042ms
	I0912 14:56:11.665844    2716 start.go:83] releasing machines lock for "multinode-914000", held for 14.250417ms
	W0912 14:56:11.665849    2716 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 14:56:11.665892    2716 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:56:11.665896    2716 start.go:703] Will try again in 5 seconds ...
	I0912 14:56:16.667977    2716 start.go:365] acquiring machines lock for multinode-914000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:56:16.668338    2716 start.go:369] acquired machines lock for "multinode-914000" in 275.875µs
	I0912 14:56:16.668470    2716 start.go:96] Skipping create...Using existing machine configuration
	I0912 14:56:16.668491    2716 fix.go:54] fixHost starting: 
	I0912 14:56:16.669179    2716 fix.go:102] recreateIfNeeded on multinode-914000: state=Stopped err=<nil>
	W0912 14:56:16.669208    2716 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 14:56:16.677445    2716 out.go:177] * Restarting existing qemu2 VM for "multinode-914000" ...
	I0912 14:56:16.681823    2716 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:20:b7:1a:ae:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2
	I0912 14:56:16.690951    2716 main.go:141] libmachine: STDOUT: 
	I0912 14:56:16.691002    2716 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:56:16.691078    2716 fix.go:56] fixHost completed within 22.589209ms
	I0912 14:56:16.691093    2716 start.go:83] releasing machines lock for "multinode-914000", held for 22.736166ms
	W0912 14:56:16.691274    2716 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-914000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-914000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:56:16.699565    2716 out.go:177] 
	W0912 14:56:16.703638    2716 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 14:56:16.703662    2716 out.go:239] * 
	* 
	W0912 14:56:16.706106    2716 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:56:16.714609    2716 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-914000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-914000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000: exit status 7 (32.968667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-914000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.37s)

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-914000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-914000 node delete m03: exit status 89 (37.976583ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-914000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-914000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-914000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-914000 status --alsologtostderr: exit status 7 (29.008792ms)

-- stdout --
	multinode-914000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 14:56:16.893708    2730 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:56:16.893841    2730 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:16.893844    2730 out.go:309] Setting ErrFile to fd 2...
	I0912 14:56:16.893847    2730 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:16.893966    2730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:56:16.894085    2730 out.go:303] Setting JSON to false
	I0912 14:56:16.894097    2730 mustload.go:65] Loading cluster: multinode-914000
	I0912 14:56:16.894397    2730 notify.go:220] Checking for updates...
	I0912 14:56:16.894869    2730 config.go:182] Loaded profile config "multinode-914000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:56:16.894887    2730 status.go:255] checking status of multinode-914000 ...
	I0912 14:56:16.895304    2730 status.go:330] multinode-914000 host status = "Stopped" (err=<nil>)
	I0912 14:56:16.895309    2730 status.go:343] host is not running, skipping remaining checks
	I0912 14:56:16.895311    2730 status.go:257] multinode-914000 status: &{Name:multinode-914000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-914000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000: exit status 7 (29.696834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-914000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (0.15s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-914000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-914000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-914000 status: exit status 7 (30.323ms)

-- stdout --
	multinode-914000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-914000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-914000 status --alsologtostderr: exit status 7 (29.598875ms)

-- stdout --
	multinode-914000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0912 14:56:17.044911    2738 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:56:17.045075    2738 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:17.045078    2738 out.go:309] Setting ErrFile to fd 2...
	I0912 14:56:17.045080    2738 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:17.045216    2738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:56:17.045369    2738 out.go:303] Setting JSON to false
	I0912 14:56:17.045381    2738 mustload.go:65] Loading cluster: multinode-914000
	I0912 14:56:17.045446    2738 notify.go:220] Checking for updates...
	I0912 14:56:17.045578    2738 config.go:182] Loaded profile config "multinode-914000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:56:17.045583    2738 status.go:255] checking status of multinode-914000 ...
	I0912 14:56:17.045792    2738 status.go:330] multinode-914000 host status = "Stopped" (err=<nil>)
	I0912 14:56:17.045796    2738 status.go:343] host is not running, skipping remaining checks
	I0912 14:56:17.045802    2738 status.go:257] multinode-914000 status: &{Name:multinode-914000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-914000 status --alsologtostderr": multinode-914000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-914000 status --alsologtostderr": multinode-914000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000: exit status 7 (28.882583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-914000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.15s)
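The two assertions above ("incorrect number of stopped hosts/kubelets") fail because the status output lists only the control-plane node where a two-node cluster was expected. A minimal sketch of that kind of count follows; the real logic in multinode_test.go is an assumption, and only the status text is copied from this report:

```go
package main

import (
	"fmt"
	"strings"
)

// countStopped tallies per-node "Stopped" status lines, roughly the way the
// assertions at multinode_test.go:333/337 appear to. This is an illustrative
// sketch, not minikube's actual test code.
func countStopped(out string) (hosts, kubelets int) {
	for _, line := range strings.Split(out, "\n") {
		switch strings.TrimSpace(line) {
		case "host: Stopped":
			hosts++
		case "kubelet: Stopped":
			kubelets++
		}
	}
	return hosts, kubelets
}

func main() {
	// Status output captured above: only the control-plane node remains,
	// so both counts come out as 1 where a two-node cluster should give 2.
	status := `multinode-914000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped`
	h, k := countStopped(status)
	fmt.Println(h, k) // 1 1
}
```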

TestMultiNode/serial/RestartMultiNode (5.24s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-914000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
E0912 14:56:20.866317    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-914000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.174031875s)

-- stdout --
	* [multinode-914000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-914000 in cluster multinode-914000
	* Restarting existing qemu2 VM for "multinode-914000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-914000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 14:56:17.102617    2742 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:56:17.102738    2742 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:17.102741    2742 out.go:309] Setting ErrFile to fd 2...
	I0912 14:56:17.102744    2742 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:17.102854    2742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:56:17.103825    2742 out.go:303] Setting JSON to false
	I0912 14:56:17.118705    2742 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1551,"bootTime":1694554226,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:56:17.118773    2742 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:56:17.123658    2742 out.go:177] * [multinode-914000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:56:17.131611    2742 notify.go:220] Checking for updates...
	I0912 14:56:17.131614    2742 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 14:56:17.135683    2742 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:56:17.138691    2742 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:56:17.141651    2742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:56:17.144698    2742 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 14:56:17.147715    2742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:56:17.150908    2742 config.go:182] Loaded profile config "multinode-914000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:56:17.151164    2742 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:56:17.155593    2742 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 14:56:17.162677    2742 start.go:298] selected driver: qemu2
	I0912 14:56:17.162681    2742 start.go:902] validating driver "qemu2" against &{Name:multinode-914000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:multinode-914000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:56:17.162739    2742 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:56:17.164631    2742 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 14:56:17.164658    2742 cni.go:84] Creating CNI manager for ""
	I0912 14:56:17.164662    2742 cni.go:136] 1 nodes found, recommending kindnet
	I0912 14:56:17.164669    2742 start_flags.go:321] config:
	{Name:multinode-914000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-914000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:56:17.168631    2742 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:56:17.175644    2742 out.go:177] * Starting control plane node multinode-914000 in cluster multinode-914000
	I0912 14:56:17.179664    2742 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 14:56:17.179685    2742 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 14:56:17.179700    2742 cache.go:57] Caching tarball of preloaded images
	I0912 14:56:17.179771    2742 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 14:56:17.179776    2742 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 14:56:17.179837    2742 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/multinode-914000/config.json ...
	I0912 14:56:17.180223    2742 start.go:365] acquiring machines lock for multinode-914000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:56:17.180250    2742 start.go:369] acquired machines lock for "multinode-914000" in 20.875µs
	I0912 14:56:17.180260    2742 start.go:96] Skipping create...Using existing machine configuration
	I0912 14:56:17.180266    2742 fix.go:54] fixHost starting: 
	I0912 14:56:17.180386    2742 fix.go:102] recreateIfNeeded on multinode-914000: state=Stopped err=<nil>
	W0912 14:56:17.180396    2742 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 14:56:17.188615    2742 out.go:177] * Restarting existing qemu2 VM for "multinode-914000" ...
	I0912 14:56:17.192703    2742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:20:b7:1a:ae:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2
	I0912 14:56:17.194546    2742 main.go:141] libmachine: STDOUT: 
	I0912 14:56:17.194567    2742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:56:17.194596    2742 fix.go:56] fixHost completed within 14.330292ms
	I0912 14:56:17.194601    2742 start.go:83] releasing machines lock for "multinode-914000", held for 14.346917ms
	W0912 14:56:17.194607    2742 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 14:56:17.194639    2742 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:56:17.194644    2742 start.go:703] Will try again in 5 seconds ...
	I0912 14:56:22.196707    2742 start.go:365] acquiring machines lock for multinode-914000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:56:22.197012    2742 start.go:369] acquired machines lock for "multinode-914000" in 244.084µs
	I0912 14:56:22.197140    2742 start.go:96] Skipping create...Using existing machine configuration
	I0912 14:56:22.197159    2742 fix.go:54] fixHost starting: 
	I0912 14:56:22.197862    2742 fix.go:102] recreateIfNeeded on multinode-914000: state=Stopped err=<nil>
	W0912 14:56:22.197887    2742 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 14:56:22.202269    2742 out.go:177] * Restarting existing qemu2 VM for "multinode-914000" ...
	I0912 14:56:22.206421    2742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:20:b7:1a:ae:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/multinode-914000/disk.qcow2
	I0912 14:56:22.215493    2742 main.go:141] libmachine: STDOUT: 
	I0912 14:56:22.215559    2742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:56:22.215635    2742 fix.go:56] fixHost completed within 18.476792ms
	I0912 14:56:22.215659    2742 start.go:83] releasing machines lock for "multinode-914000", held for 18.6205ms
	W0912 14:56:22.215884    2742 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-914000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-914000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:56:22.223279    2742 out.go:177] 
	W0912 14:56:22.227329    2742 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 14:56:22.227362    2742 out.go:239] * 
	* 
	W0912 14:56:22.229872    2742 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:56:22.237073    2742 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-914000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000: exit status 7 (66.860625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-914000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.24s)

TestMultiNode/serial/ValidateNameConflict (20.37s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-914000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-914000-m01 --driver=qemu2 
E0912 14:56:31.108829    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-914000-m01 --driver=qemu2 : exit status 80 (9.919382292s)

-- stdout --
	* [multinode-914000-m01] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-914000-m01 in cluster multinode-914000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-914000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-914000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-914000-m02 --driver=qemu2 
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-914000-m02 --driver=qemu2 : exit status 80 (10.213630791s)

-- stdout --
	* [multinode-914000-m02] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-914000-m02 in cluster multinode-914000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-914000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-914000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-914000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-914000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-914000: exit status 89 (77.844125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-914000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-914000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-914000 -n multinode-914000: exit status 7 (30.19ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-914000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.37s)
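Every failure in this group traces back to the same root cause: nothing is listening on /var/run/socket_vmnet. A small probe that distinguishes "daemon down" from "socket never created" follows; the path comes from SocketVMnetPath in the profile configs above, and the restart advice in the comment is an assumption about a Homebrew install, not something stated in this report:

```go
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// probeSocket reports whether a Unix socket exists and accepts connections.
// "Connection refused" on an existing socket file usually means the daemon
// that owned it (here, socket_vmnet) exited without cleaning up.
func probeSocket(path string) string {
	if _, err := os.Stat(path); err != nil {
		return "socket file missing: " + path
	}
	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		return "socket present but refusing connections: " + err.Error()
	}
	conn.Close()
	return "socket is listening"
}

func main() {
	// Path taken from SocketVMnetPath in the config dumps above. On a
	// Homebrew install the daemon would typically be restarted with
	// `sudo brew services restart socket_vmnet` (an assumption).
	fmt.Println(probeSocket("/var/run/socket_vmnet"))
}
```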

TestPreload (10.24s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-959000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0912 14:56:51.591120    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-959000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.074270125s)

-- stdout --
	* [test-preload-959000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-959000 in cluster test-preload-959000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-959000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 14:56:42.847160    2799 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:56:42.847282    2799 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:42.847286    2799 out.go:309] Setting ErrFile to fd 2...
	I0912 14:56:42.847288    2799 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:56:42.847413    2799 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:56:42.848448    2799 out.go:303] Setting JSON to false
	I0912 14:56:42.863644    2799 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1576,"bootTime":1694554226,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:56:42.863712    2799 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:56:42.868011    2799 out.go:177] * [test-preload-959000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:56:42.875799    2799 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 14:56:42.879919    2799 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:56:42.875853    2799 notify.go:220] Checking for updates...
	I0912 14:56:42.884322    2799 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:56:42.887918    2799 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:56:42.890957    2799 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 14:56:42.893939    2799 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:56:42.897330    2799 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:56:42.897377    2799 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:56:42.901892    2799 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 14:56:42.908927    2799 start.go:298] selected driver: qemu2
	I0912 14:56:42.908931    2799 start.go:902] validating driver "qemu2" against <nil>
	I0912 14:56:42.908937    2799 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:56:42.910903    2799 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 14:56:42.913893    2799 out.go:177] * Automatically selected the socket_vmnet network
	I0912 14:56:42.917082    2799 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 14:56:42.917103    2799 cni.go:84] Creating CNI manager for ""
	I0912 14:56:42.917111    2799 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:56:42.917116    2799 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 14:56:42.917122    2799 start_flags.go:321] config:
	{Name:test-preload-959000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-959000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:56:42.921268    2799 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:56:42.928941    2799 out.go:177] * Starting control plane node test-preload-959000 in cluster test-preload-959000
	I0912 14:56:42.932910    2799 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0912 14:56:42.933007    2799 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/test-preload-959000/config.json ...
	I0912 14:56:42.933036    2799 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/test-preload-959000/config.json: {Name:mkca2122f231b05346fea21960604deb23adfd8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:56:42.933040    2799 cache.go:107] acquiring lock: {Name:mkc1a77caa83518e0594a0d738906ba672cfffcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:56:42.933049    2799 cache.go:107] acquiring lock: {Name:mkd7621d0468747dfc606d10ca7bd5671bee1742 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:56:42.933054    2799 cache.go:107] acquiring lock: {Name:mk44dc3a797778087818a2f9f2fcde1c945e1e2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:56:42.933219    2799 cache.go:107] acquiring lock: {Name:mk73d44b05644f2be977ed10d72e2d6d59d22a1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:56:42.933279    2799 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0912 14:56:42.933297    2799 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0912 14:56:42.933286    2799 cache.go:107] acquiring lock: {Name:mk307af00b20a192aaab1d6a1215ee9792e69992 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:56:42.933323    2799 cache.go:107] acquiring lock: {Name:mka58eb8629b3e914db1b21839fc7c68a9046152 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:56:42.933345    2799 cache.go:107] acquiring lock: {Name:mk2c960a3571c15a4669dcbcdf20f815c117f99c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:56:42.933394    2799 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0912 14:56:42.933332    2799 start.go:365] acquiring machines lock for test-preload-959000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:56:42.933434    2799 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 14:56:42.933448    2799 start.go:369] acquired machines lock for "test-preload-959000" in 33µs
	I0912 14:56:42.933454    2799 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0912 14:56:42.933461    2799 start.go:93] Provisioning new machine with config: &{Name:test-preload-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-959000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:56:42.933504    2799 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:56:42.933467    2799 cache.go:107] acquiring lock: {Name:mkfbcd602251b548a7bc90636b89c1d599646011 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:56:42.940883    2799 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 14:56:42.933544    2799 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0912 14:56:42.933611    2799 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0912 14:56:42.933633    2799 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0912 14:56:42.946142    2799 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0912 14:56:42.946429    2799 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 14:56:42.946775    2799 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0912 14:56:42.946861    2799 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0912 14:56:42.950471    2799 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0912 14:56:42.950522    2799 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0912 14:56:42.950576    2799 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0912 14:56:42.950592    2799 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0912 14:56:42.957952    2799 start.go:159] libmachine.API.Create for "test-preload-959000" (driver="qemu2")
	I0912 14:56:42.957971    2799 client.go:168] LocalClient.Create starting
	I0912 14:56:42.958040    2799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:56:42.958065    2799 main.go:141] libmachine: Decoding PEM data...
	I0912 14:56:42.958079    2799 main.go:141] libmachine: Parsing certificate...
	I0912 14:56:42.958119    2799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:56:42.958138    2799 main.go:141] libmachine: Decoding PEM data...
	I0912 14:56:42.958147    2799 main.go:141] libmachine: Parsing certificate...
	I0912 14:56:42.958485    2799 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:56:43.267028    2799 main.go:141] libmachine: Creating SSH key...
	I0912 14:56:43.315176    2799 main.go:141] libmachine: Creating Disk image...
	I0912 14:56:43.315191    2799 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:56:43.315344    2799 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/disk.qcow2
	I0912 14:56:43.324586    2799 main.go:141] libmachine: STDOUT: 
	I0912 14:56:43.324611    2799 main.go:141] libmachine: STDERR: 
	I0912 14:56:43.324677    2799 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/disk.qcow2 +20000M
	I0912 14:56:43.332464    2799 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:56:43.332482    2799 main.go:141] libmachine: STDERR: 
	I0912 14:56:43.332499    2799 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/disk.qcow2
	I0912 14:56:43.332508    2799 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:56:43.332557    2799 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:0c:d9:e6:03:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/disk.qcow2
	I0912 14:56:43.334207    2799 main.go:141] libmachine: STDOUT: 
	I0912 14:56:43.334221    2799 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:56:43.334240    2799 client.go:171] LocalClient.Create took 376.271583ms
	I0912 14:56:43.770881    2799 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0912 14:56:43.901775    2799 cache.go:157] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0912 14:56:43.901795    2799 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 968.629708ms
	I0912 14:56:43.901805    2799 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0912 14:56:44.021488    2799 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0912 14:56:44.222429    2799 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0912 14:56:44.449456    2799 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0912 14:56:44.451953    2799 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0912 14:56:44.451969    2799 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0912 14:56:44.677111    2799 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0912 14:56:44.844433    2799 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0912 14:56:44.844468    2799 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0912 14:56:45.143624    2799 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0912 14:56:45.279629    2799 cache.go:157] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 14:56:45.279681    2799 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.346689125s
	I0912 14:56:45.279704    2799 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 14:56:45.334591    2799 start.go:128] duration metric: createHost completed in 2.401110917s
	I0912 14:56:45.334643    2799 start.go:83] releasing machines lock for "test-preload-959000", held for 2.4012305s
	W0912 14:56:45.334698    2799 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:56:45.340791    2799 out.go:177] * Deleting "test-preload-959000" in qemu2 ...
	W0912 14:56:45.359609    2799 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:56:45.359646    2799 start.go:703] Will try again in 5 seconds ...
	I0912 14:56:46.343832    2799 cache.go:157] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0912 14:56:46.343883    2799 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.410642417s
	I0912 14:56:46.343914    2799 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0912 14:56:47.725586    2799 cache.go:157] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0912 14:56:47.725640    2799 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.792461084s
	I0912 14:56:47.725690    2799 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0912 14:56:48.647612    2799 cache.go:157] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0912 14:56:48.647682    2799 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.714755333s
	I0912 14:56:48.647713    2799 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0912 14:56:48.779216    2799 cache.go:157] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0912 14:56:48.779276    2799 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.846353333s
	I0912 14:56:48.779303    2799 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0912 14:56:49.909636    2799 cache.go:157] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0912 14:56:49.909699    2799 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.976487875s
	I0912 14:56:49.909725    2799 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0912 14:56:50.359952    2799 start.go:365] acquiring machines lock for test-preload-959000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 14:56:50.360460    2799 start.go:369] acquired machines lock for "test-preload-959000" in 420.792µs
	I0912 14:56:50.360597    2799 start.go:93] Provisioning new machine with config: &{Name:test-preload-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-959000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 14:56:50.361025    2799 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 14:56:50.370653    2799 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 14:56:50.417802    2799 start.go:159] libmachine.API.Create for "test-preload-959000" (driver="qemu2")
	I0912 14:56:50.417848    2799 client.go:168] LocalClient.Create starting
	I0912 14:56:50.417965    2799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 14:56:50.418016    2799 main.go:141] libmachine: Decoding PEM data...
	I0912 14:56:50.418043    2799 main.go:141] libmachine: Parsing certificate...
	I0912 14:56:50.418110    2799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 14:56:50.418144    2799 main.go:141] libmachine: Decoding PEM data...
	I0912 14:56:50.418165    2799 main.go:141] libmachine: Parsing certificate...
	I0912 14:56:50.418622    2799 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 14:56:50.754080    2799 main.go:141] libmachine: Creating SSH key...
	I0912 14:56:50.831408    2799 main.go:141] libmachine: Creating Disk image...
	I0912 14:56:50.831415    2799 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 14:56:50.831573    2799 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/disk.qcow2
	I0912 14:56:50.840117    2799 main.go:141] libmachine: STDOUT: 
	I0912 14:56:50.840131    2799 main.go:141] libmachine: STDERR: 
	I0912 14:56:50.840183    2799 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/disk.qcow2 +20000M
	I0912 14:56:50.847398    2799 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 14:56:50.847420    2799 main.go:141] libmachine: STDERR: 
	I0912 14:56:50.847434    2799 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/disk.qcow2
	I0912 14:56:50.847452    2799 main.go:141] libmachine: Starting QEMU VM...
	I0912 14:56:50.847486    2799 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:bd:01:96:6c:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/test-preload-959000/disk.qcow2
	I0912 14:56:50.849101    2799 main.go:141] libmachine: STDOUT: 
	I0912 14:56:50.849115    2799 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 14:56:50.849131    2799 client.go:171] LocalClient.Create took 431.287333ms
	I0912 14:56:51.978569    2799 cache.go:157] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0912 14:56:51.978626    2799 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.045583334s
	I0912 14:56:51.978665    2799 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0912 14:56:51.978716    2799 cache.go:87] Successfully saved all images to host disk.
	I0912 14:56:52.849221    2799 start.go:128] duration metric: createHost completed in 2.488223625s
	I0912 14:56:52.853898    2799 start.go:83] releasing machines lock for "test-preload-959000", held for 2.493454625s
	W0912 14:56:52.854160    2799 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-959000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-959000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 14:56:52.862557    2799 out.go:177] 
	W0912 14:56:52.867494    2799 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 14:56:52.867550    2799 out.go:239] * 
	* 
	W0912 14:56:52.870143    2799 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:56:52.879469    2799 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-959000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:523: *** TestPreload FAILED at 2023-09-12 14:56:52.897725 -0700 PDT m=+836.257262459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-959000 -n test-preload-959000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-959000 -n test-preload-959000: exit status 7 (65.030958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-959000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-959000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-959000
--- FAIL: TestPreload (10.24s)

TestScheduledStopUnix (10.09s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-726000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-726000 --memory=2048 --driver=qemu2 : exit status 80 (9.925498334s)

-- stdout --
	* [scheduled-stop-726000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-726000 in cluster scheduled-stop-726000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-726000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-726000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-726000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-726000 in cluster scheduled-stop-726000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-726000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-726000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-09-12 14:57:02.987834 -0700 PDT m=+846.347573417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-726000 -n scheduled-stop-726000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-726000 -n scheduled-stop-726000: exit status 7 (66.48925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-726000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-726000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-726000
--- FAIL: TestScheduledStopUnix (10.09s)

TestSkaffold (13.45s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1882383676 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-269000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-269000 --memory=2600 --driver=qemu2 : exit status 80 (9.972254875s)

-- stdout --
	* [skaffold-269000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-269000 in cluster skaffold-269000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-269000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-269000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-269000 in cluster skaffold-269000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-269000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:523: *** TestSkaffold FAILED at 2023-09-12 14:57:16.44063 -0700 PDT m=+859.800637751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-269000 -n skaffold-269000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-269000 -n skaffold-269000: exit status 7 (62.004458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-269000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-269000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-269000
--- FAIL: TestSkaffold (13.45s)
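Every qemu2 start in the failures above dies the same way: nothing is accepting connections on /var/run/socket_vmnet, so libmachine cannot attach the VM's network and the host creation is retried once and then aborted. A minimal probe (a hypothetical diagnostic sketch, not part of minikube or this test suite) can distinguish a socket file that is missing from one whose daemon has exited:

```python
import socket

def probe_unix_socket(path):
    """Classify a UNIX-domain socket path as listening, refused, or missing."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect(path)
        return "listening"   # a daemon accepted the connection
    except FileNotFoundError:
        return "missing"     # socket file does not exist
    except ConnectionRefusedError:
        return "refused"     # file exists, but no process is accepting on it
    finally:
        s.close()
```

Running `probe_unix_socket("/var/run/socket_vmnet")` on the agent before a test run and getting "missing" or "refused" would confirm that the socket_vmnet daemon was not running, which matches the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` errors in every qemu2 test on this machine.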

TestRunningBinaryUpgrade (149.9s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade


=== CONT  TestRunningBinaryUpgrade
E0912 14:57:56.752024    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
E0912 14:58:24.456855    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
E0912 14:58:54.474010    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
version_upgrade_test.go:107: v1.6.2 release installation failed: bad response code: 404
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-09-12 15:00:26.080099 -0700 PDT m=+1049.443894001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-755000 -n running-upgrade-755000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-755000 -n running-upgrade-755000: exit status 85 (82.77025ms)

-- stdout --
	* Profile "running-upgrade-755000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-755000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-755000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-755000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-755000\"")
helpers_test.go:175: Cleaning up "running-upgrade-755000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-755000
--- FAIL: TestRunningBinaryUpgrade (149.90s)

TestKubernetesUpgrade (15.38s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-301000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-301000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.849435542s)

-- stdout --
	* [kubernetes-upgrade-301000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-301000 in cluster kubernetes-upgrade-301000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-301000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:00:26.434006    3295 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:00:26.434140    3295 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:00:26.434143    3295 out.go:309] Setting ErrFile to fd 2...
	I0912 15:00:26.434146    3295 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:00:26.434266    3295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:00:26.435266    3295 out.go:303] Setting JSON to false
	I0912 15:00:26.450234    3295 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1800,"bootTime":1694554226,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:00:26.450317    3295 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:00:26.455486    3295 out.go:177] * [kubernetes-upgrade-301000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:00:26.462590    3295 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:00:26.465516    3295 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:00:26.462653    3295 notify.go:220] Checking for updates...
	I0912 15:00:26.468543    3295 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:00:26.471582    3295 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:00:26.474526    3295 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:00:26.477564    3295 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:00:26.480887    3295 config.go:182] Loaded profile config "cert-expiration-050000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:00:26.480952    3295 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:00:26.481002    3295 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:00:26.485479    3295 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:00:26.492500    3295 start.go:298] selected driver: qemu2
	I0912 15:00:26.492504    3295 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:00:26.492509    3295 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:00:26.494467    3295 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:00:26.497404    3295 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:00:26.500584    3295 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 15:00:26.500604    3295 cni.go:84] Creating CNI manager for ""
	I0912 15:00:26.500620    3295 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 15:00:26.500624    3295 start_flags.go:321] config:
	{Name:kubernetes-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-301000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:00:26.504805    3295 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:00:26.511487    3295 out.go:177] * Starting control plane node kubernetes-upgrade-301000 in cluster kubernetes-upgrade-301000
	I0912 15:00:26.515493    3295 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0912 15:00:26.515514    3295 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0912 15:00:26.515527    3295 cache.go:57] Caching tarball of preloaded images
	I0912 15:00:26.515597    3295 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:00:26.515610    3295 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0912 15:00:26.515673    3295 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/kubernetes-upgrade-301000/config.json ...
	I0912 15:00:26.515687    3295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/kubernetes-upgrade-301000/config.json: {Name:mk760e9b79d427333b075941950502b963d15e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:00:26.515896    3295 start.go:365] acquiring machines lock for kubernetes-upgrade-301000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:00:26.515931    3295 start.go:369] acquired machines lock for "kubernetes-upgrade-301000" in 25.375µs
	I0912 15:00:26.515943    3295 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-301000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:00:26.515979    3295 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:00:26.520486    3295 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:00:26.536478    3295 start.go:159] libmachine.API.Create for "kubernetes-upgrade-301000" (driver="qemu2")
	I0912 15:00:26.536501    3295 client.go:168] LocalClient.Create starting
	I0912 15:00:26.536563    3295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:00:26.536591    3295 main.go:141] libmachine: Decoding PEM data...
	I0912 15:00:26.536604    3295 main.go:141] libmachine: Parsing certificate...
	I0912 15:00:26.536642    3295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:00:26.536662    3295 main.go:141] libmachine: Decoding PEM data...
	I0912 15:00:26.536676    3295 main.go:141] libmachine: Parsing certificate...
	I0912 15:00:26.536991    3295 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:00:26.651723    3295 main.go:141] libmachine: Creating SSH key...
	I0912 15:00:26.806649    3295 main.go:141] libmachine: Creating Disk image...
	I0912 15:00:26.806656    3295 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:00:26.806841    3295 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/disk.qcow2
	I0912 15:00:26.816490    3295 main.go:141] libmachine: STDOUT: 
	I0912 15:00:26.816516    3295 main.go:141] libmachine: STDERR: 
	I0912 15:00:26.816576    3295 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/disk.qcow2 +20000M
	I0912 15:00:26.823756    3295 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:00:26.823769    3295 main.go:141] libmachine: STDERR: 
	I0912 15:00:26.823786    3295 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/disk.qcow2
	I0912 15:00:26.823801    3295 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:00:26.823834    3295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:51:b0:62:3b:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/disk.qcow2
	I0912 15:00:26.825326    3295 main.go:141] libmachine: STDOUT: 
	I0912 15:00:26.825338    3295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:00:26.825355    3295 client.go:171] LocalClient.Create took 288.854834ms
	I0912 15:00:28.827481    3295 start.go:128] duration metric: createHost completed in 2.311531166s
	I0912 15:00:28.827581    3295 start.go:83] releasing machines lock for "kubernetes-upgrade-301000", held for 2.311656834s
	W0912 15:00:28.827658    3295 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:00:28.835789    3295 out.go:177] * Deleting "kubernetes-upgrade-301000" in qemu2 ...
	W0912 15:00:28.856113    3295 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:00:28.856144    3295 start.go:703] Will try again in 5 seconds ...
	I0912 15:00:33.858231    3295 start.go:365] acquiring machines lock for kubernetes-upgrade-301000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:00:33.858651    3295 start.go:369] acquired machines lock for "kubernetes-upgrade-301000" in 325.916µs
	I0912 15:00:33.858752    3295 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-301000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:00:33.859009    3295 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:00:33.868962    3295 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:00:33.909703    3295 start.go:159] libmachine.API.Create for "kubernetes-upgrade-301000" (driver="qemu2")
	I0912 15:00:33.909738    3295 client.go:168] LocalClient.Create starting
	I0912 15:00:33.909883    3295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:00:33.909938    3295 main.go:141] libmachine: Decoding PEM data...
	I0912 15:00:33.909958    3295 main.go:141] libmachine: Parsing certificate...
	I0912 15:00:33.910032    3295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:00:33.910068    3295 main.go:141] libmachine: Decoding PEM data...
	I0912 15:00:33.910084    3295 main.go:141] libmachine: Parsing certificate...
	I0912 15:00:33.910518    3295 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:00:34.041218    3295 main.go:141] libmachine: Creating SSH key...
	I0912 15:00:34.196436    3295 main.go:141] libmachine: Creating Disk image...
	I0912 15:00:34.196446    3295 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:00:34.196598    3295 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/disk.qcow2
	I0912 15:00:34.205117    3295 main.go:141] libmachine: STDOUT: 
	I0912 15:00:34.205132    3295 main.go:141] libmachine: STDERR: 
	I0912 15:00:34.205198    3295 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/disk.qcow2 +20000M
	I0912 15:00:34.212495    3295 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:00:34.212510    3295 main.go:141] libmachine: STDERR: 
	I0912 15:00:34.212524    3295 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/disk.qcow2
	I0912 15:00:34.212532    3295 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:00:34.212574    3295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:75:e1:1a:43:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/disk.qcow2
	I0912 15:00:34.214092    3295 main.go:141] libmachine: STDOUT: 
	I0912 15:00:34.214105    3295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:00:34.214118    3295 client.go:171] LocalClient.Create took 304.380958ms
	I0912 15:00:36.216247    3295 start.go:128] duration metric: createHost completed in 2.357259916s
	I0912 15:00:36.216310    3295 start.go:83] releasing machines lock for "kubernetes-upgrade-301000", held for 2.357679834s
	W0912 15:00:36.216660    3295 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-301000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-301000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:00:36.225468    3295 out.go:177] 
	W0912 15:00:36.230520    3295 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:00:36.230562    3295 out.go:239] * 
	* 
	W0912 15:00:36.233361    3295 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:00:36.242471    3295 out.go:177] 

** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-301000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-301000
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-301000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-301000 status --format={{.Host}}: exit status 7 (36.612666ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-301000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-301000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.192792958s)

-- stdout --
	* [kubernetes-upgrade-301000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-301000 in cluster kubernetes-upgrade-301000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:00:36.422647    3314 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:00:36.422775    3314 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:00:36.422778    3314 out.go:309] Setting ErrFile to fd 2...
	I0912 15:00:36.422781    3314 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:00:36.422908    3314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:00:36.423826    3314 out.go:303] Setting JSON to false
	I0912 15:00:36.438714    3314 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1810,"bootTime":1694554226,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:00:36.438774    3314 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:00:36.444821    3314 out.go:177] * [kubernetes-upgrade-301000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:00:36.454807    3314 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:00:36.459654    3314 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:00:36.454849    3314 notify.go:220] Checking for updates...
	I0912 15:00:36.474821    3314 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:00:36.477755    3314 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:00:36.480778    3314 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:00:36.483780    3314 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:00:36.485474    3314 config.go:182] Loaded profile config "kubernetes-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0912 15:00:36.485772    3314 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:00:36.489725    3314 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:00:36.496618    3314 start.go:298] selected driver: qemu2
	I0912 15:00:36.496623    3314 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-301000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:00:36.496693    3314 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:00:36.498857    3314 cni.go:84] Creating CNI manager for ""
	I0912 15:00:36.498876    3314 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:00:36.498882    3314 start_flags.go:321] config:
	{Name:kubernetes-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubernetes-upgrade-301000 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:00:36.503263    3314 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:00:36.510745    3314 out.go:177] * Starting control plane node kubernetes-upgrade-301000 in cluster kubernetes-upgrade-301000
	I0912 15:00:36.514752    3314 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:00:36.514784    3314 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:00:36.514803    3314 cache.go:57] Caching tarball of preloaded images
	I0912 15:00:36.514877    3314 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:00:36.514883    3314 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:00:36.514951    3314 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/kubernetes-upgrade-301000/config.json ...
	I0912 15:00:36.515370    3314 start.go:365] acquiring machines lock for kubernetes-upgrade-301000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:00:36.515407    3314 start.go:369] acquired machines lock for "kubernetes-upgrade-301000" in 30.125µs
	I0912 15:00:36.515419    3314 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:00:36.515428    3314 fix.go:54] fixHost starting: 
	I0912 15:00:36.515577    3314 fix.go:102] recreateIfNeeded on kubernetes-upgrade-301000: state=Stopped err=<nil>
	W0912 15:00:36.515591    3314 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 15:00:36.522683    3314 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-301000" ...
	I0912 15:00:36.526821    3314 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:75:e1:1a:43:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/disk.qcow2
	I0912 15:00:36.529031    3314 main.go:141] libmachine: STDOUT: 
	I0912 15:00:36.529048    3314 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:00:36.529086    3314 fix.go:56] fixHost completed within 13.659584ms
	I0912 15:00:36.529091    3314 start.go:83] releasing machines lock for "kubernetes-upgrade-301000", held for 13.678833ms
	W0912 15:00:36.529109    3314 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:00:36.529163    3314 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:00:36.529169    3314 start.go:703] Will try again in 5 seconds ...
	I0912 15:00:41.531296    3314 start.go:365] acquiring machines lock for kubernetes-upgrade-301000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:00:41.531754    3314 start.go:369] acquired machines lock for "kubernetes-upgrade-301000" in 380.583µs
	I0912 15:00:41.531907    3314 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:00:41.531929    3314 fix.go:54] fixHost starting: 
	I0912 15:00:41.532652    3314 fix.go:102] recreateIfNeeded on kubernetes-upgrade-301000: state=Stopped err=<nil>
	W0912 15:00:41.532678    3314 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 15:00:41.542021    3314 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-301000" ...
	I0912 15:00:41.546272    3314 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:75:e1:1a:43:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubernetes-upgrade-301000/disk.qcow2
	I0912 15:00:41.555472    3314 main.go:141] libmachine: STDOUT: 
	I0912 15:00:41.555527    3314 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:00:41.555611    3314 fix.go:56] fixHost completed within 23.686125ms
	I0912 15:00:41.555626    3314 start.go:83] releasing machines lock for "kubernetes-upgrade-301000", held for 23.848875ms
	W0912 15:00:41.555777    3314 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:00:41.562084    3314 out.go:177] 
	W0912 15:00:41.565041    3314 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:00:41.565068    3314 out.go:239] * 
	* 
	W0912 15:00:41.567795    3314 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:00:41.576027    3314 out.go:177] 

** /stderr **
version_upgrade_test.go:258: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-301000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-301000 version --output=json
version_upgrade_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-301000 version --output=json: exit status 1 (66.344917ms)

** stderr ** 
	error: context "kubernetes-upgrade-301000" does not exist

** /stderr **
version_upgrade_test.go:263: error running kubectl: exit status 1
panic.go:523: *** TestKubernetesUpgrade FAILED at 2023-09-12 15:00:41.656391 -0700 PDT m=+1065.020497584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-301000 -n kubernetes-upgrade-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-301000 -n kubernetes-upgrade-301000: exit status 7 (33.23825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-301000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-301000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-301000
--- FAIL: TestKubernetesUpgrade (15.38s)
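Note: every qemu2 failure above reduces to the same root cause, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not listening when the test agent launched the VM. A minimal diagnostic sketch (not part of the test suite; the socket path is taken from the log, and the Homebrew start command is the one documented for socket_vmnet installs):

```shell
# Hypothetical pre-flight check: is the socket_vmnet unix socket present?
check_vmnet_socket() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    # -S tests that the path exists and is a socket
    echo "ok: $sock exists"
    return 0
  fi
  echo "missing: $sock"
  return 1
}

check_vmnet_socket "/var/run/socket_vmnet" || \
  echo 'socket_vmnet appears to be down; on Homebrew installs it is typically started with "sudo brew services start socket_vmnet"'
```

Running such a check before the qemu2 test groups would distinguish an environment problem (daemon down on the Jenkins agent) from a genuine minikube regression.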

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.14s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17194
- KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current560963879/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.14s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.29s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17194
- KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2363485039/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.29s)

TestStoppedBinaryUpgrade/Setup (162.59s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:168: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (162.59s)

TestPause/serial/Start (9.75s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-577000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-577000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.681975625s)

-- stdout --
	* [pause-577000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-577000 in cluster pause-577000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-577000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-577000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-577000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-577000 -n pause-577000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-577000 -n pause-577000: exit status 7 (67.575292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-577000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.75s)

TestNoKubernetes/serial/StartWithK8s (9.9s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-647000 --driver=qemu2 
E0912 15:01:10.607999    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-647000 --driver=qemu2 : exit status 80 (9.833250708s)

-- stdout --
	* [NoKubernetes-647000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-647000 in cluster NoKubernetes-647000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-647000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-647000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-647000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-647000 -n NoKubernetes-647000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-647000 -n NoKubernetes-647000: exit status 7 (67.773125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-647000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.90s)

TestNoKubernetes/serial/StartWithStopK8s (5.47s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-647000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-647000 --no-kubernetes --driver=qemu2 : exit status 80 (5.401879917s)

-- stdout --
	* [NoKubernetes-647000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-647000
	* Restarting existing qemu2 VM for "NoKubernetes-647000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-647000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-647000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-647000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-647000 -n NoKubernetes-647000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-647000 -n NoKubernetes-647000: exit status 7 (68.714ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-647000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.47s)

TestNoKubernetes/serial/Start (5.48s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-647000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-647000 --no-kubernetes --driver=qemu2 : exit status 80 (5.40604525s)

-- stdout --
	* [NoKubernetes-647000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-647000
	* Restarting existing qemu2 VM for "NoKubernetes-647000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-647000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-647000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-647000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-647000 -n NoKubernetes-647000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-647000 -n NoKubernetes-647000: exit status 7 (68.642792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-647000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.48s)

TestNoKubernetes/serial/StartNoArgs (5.47s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-647000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-647000 --driver=qemu2 : exit status 80 (5.396589291s)

-- stdout --
	* [NoKubernetes-647000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-647000
	* Restarting existing qemu2 VM for "NoKubernetes-647000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-647000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-647000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-647000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-647000 -n NoKubernetes-647000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-647000 -n NoKubernetes-647000: exit status 7 (68.244583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-647000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.47s)

TestNetworkPlugins/group/auto/Start (9.9s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
E0912 15:01:38.313378    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/ingress-addon-legacy-627000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.897210375s)

-- stdout --
	* [auto-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-786000 in cluster auto-786000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:01:33.302401    3437 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:01:33.302539    3437 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:01:33.302542    3437 out.go:309] Setting ErrFile to fd 2...
	I0912 15:01:33.302545    3437 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:01:33.302669    3437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:01:33.303669    3437 out.go:303] Setting JSON to false
	I0912 15:01:33.318711    3437 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1867,"bootTime":1694554226,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:01:33.318810    3437 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:01:33.323129    3437 out.go:177] * [auto-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:01:33.331088    3437 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:01:33.331143    3437 notify.go:220] Checking for updates...
	I0912 15:01:33.334964    3437 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:01:33.338025    3437 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:01:33.341080    3437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:01:33.344039    3437 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:01:33.347083    3437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:01:33.350416    3437 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:01:33.350460    3437 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:01:33.354909    3437 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:01:33.362085    3437 start.go:298] selected driver: qemu2
	I0912 15:01:33.362091    3437 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:01:33.362097    3437 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:01:33.363988    3437 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:01:33.367030    3437 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:01:33.370066    3437 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:01:33.370087    3437 cni.go:84] Creating CNI manager for ""
	I0912 15:01:33.370094    3437 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:01:33.370098    3437 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:01:33.370104    3437 start_flags.go:321] config:
	{Name:auto-786000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:01:33.374177    3437 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:01:33.380956    3437 out.go:177] * Starting control plane node auto-786000 in cluster auto-786000
	I0912 15:01:33.385077    3437 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:01:33.385098    3437 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:01:33.385108    3437 cache.go:57] Caching tarball of preloaded images
	I0912 15:01:33.385173    3437 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:01:33.385179    3437 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:01:33.385278    3437 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/auto-786000/config.json ...
	I0912 15:01:33.385291    3437 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/auto-786000/config.json: {Name:mk2f5648beb37c6ad5bf7665349e911f7d76cd7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:01:33.385505    3437 start.go:365] acquiring machines lock for auto-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:01:33.385536    3437 start.go:369] acquired machines lock for "auto-786000" in 25.791µs
	I0912 15:01:33.385550    3437 start.go:93] Provisioning new machine with config: &{Name:auto-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:01:33.385594    3437 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:01:33.393014    3437 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:01:33.409082    3437 start.go:159] libmachine.API.Create for "auto-786000" (driver="qemu2")
	I0912 15:01:33.409107    3437 client.go:168] LocalClient.Create starting
	I0912 15:01:33.409178    3437 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:01:33.409206    3437 main.go:141] libmachine: Decoding PEM data...
	I0912 15:01:33.409219    3437 main.go:141] libmachine: Parsing certificate...
	I0912 15:01:33.409262    3437 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:01:33.409281    3437 main.go:141] libmachine: Decoding PEM data...
	I0912 15:01:33.409287    3437 main.go:141] libmachine: Parsing certificate...
	I0912 15:01:33.409645    3437 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:01:33.532295    3437 main.go:141] libmachine: Creating SSH key...
	I0912 15:01:33.706901    3437 main.go:141] libmachine: Creating Disk image...
	I0912 15:01:33.706910    3437 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:01:33.707056    3437 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/disk.qcow2
	I0912 15:01:33.716131    3437 main.go:141] libmachine: STDOUT: 
	I0912 15:01:33.716151    3437 main.go:141] libmachine: STDERR: 
	I0912 15:01:33.716222    3437 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/disk.qcow2 +20000M
	I0912 15:01:33.723506    3437 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:01:33.723519    3437 main.go:141] libmachine: STDERR: 
	I0912 15:01:33.723540    3437 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/disk.qcow2
	I0912 15:01:33.723551    3437 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:01:33.723585    3437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:23:04:45:0b:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/disk.qcow2
	I0912 15:01:33.725107    3437 main.go:141] libmachine: STDOUT: 
	I0912 15:01:33.725121    3437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:01:33.725140    3437 client.go:171] LocalClient.Create took 316.035125ms
	I0912 15:01:35.727271    3437 start.go:128] duration metric: createHost completed in 2.34170125s
	I0912 15:01:35.727334    3437 start.go:83] releasing machines lock for "auto-786000", held for 2.341834416s
	W0912 15:01:35.727402    3437 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:01:35.738603    3437 out.go:177] * Deleting "auto-786000" in qemu2 ...
	W0912 15:01:35.758501    3437 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:01:35.758532    3437 start.go:703] Will try again in 5 seconds ...
	I0912 15:01:40.760762    3437 start.go:365] acquiring machines lock for auto-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:01:40.761308    3437 start.go:369] acquired machines lock for "auto-786000" in 437.667µs
	I0912 15:01:40.761443    3437 start.go:93] Provisioning new machine with config: &{Name:auto-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:auto-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:01:40.761744    3437 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:01:40.770389    3437 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:01:40.818384    3437 start.go:159] libmachine.API.Create for "auto-786000" (driver="qemu2")
	I0912 15:01:40.818429    3437 client.go:168] LocalClient.Create starting
	I0912 15:01:40.818559    3437 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:01:40.818614    3437 main.go:141] libmachine: Decoding PEM data...
	I0912 15:01:40.818637    3437 main.go:141] libmachine: Parsing certificate...
	I0912 15:01:40.818714    3437 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:01:40.818749    3437 main.go:141] libmachine: Decoding PEM data...
	I0912 15:01:40.818761    3437 main.go:141] libmachine: Parsing certificate...
	I0912 15:01:40.819313    3437 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:01:40.971763    3437 main.go:141] libmachine: Creating SSH key...
	I0912 15:01:41.109375    3437 main.go:141] libmachine: Creating Disk image...
	I0912 15:01:41.109381    3437 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:01:41.109546    3437 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/disk.qcow2
	I0912 15:01:41.118514    3437 main.go:141] libmachine: STDOUT: 
	I0912 15:01:41.118528    3437 main.go:141] libmachine: STDERR: 
	I0912 15:01:41.118589    3437 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/disk.qcow2 +20000M
	I0912 15:01:41.125726    3437 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:01:41.125738    3437 main.go:141] libmachine: STDERR: 
	I0912 15:01:41.125750    3437 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/disk.qcow2
	I0912 15:01:41.125754    3437 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:01:41.125795    3437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:f3:34:6e:da:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/auto-786000/disk.qcow2
	I0912 15:01:41.127303    3437 main.go:141] libmachine: STDOUT: 
	I0912 15:01:41.127315    3437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:01:41.127326    3437 client.go:171] LocalClient.Create took 308.898375ms
	I0912 15:01:43.129459    3437 start.go:128] duration metric: createHost completed in 2.367733042s
	I0912 15:01:43.129528    3437 start.go:83] releasing machines lock for "auto-786000", held for 2.368244709s
	W0912 15:01:43.129973    3437 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:01:43.140599    3437 out.go:177] 
	W0912 15:01:43.144679    3437 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:01:43.144719    3437 out.go:239] * 
	* 
	W0912 15:01:43.147334    3437 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:01:43.157540    3437 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.90s)
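Every failure in this group reduces to the same root cause: `Failed to connect to "/var/run/socket_vmnet": Connection refused` when libmachine launches the VM through `socket_vmnet_client`. A quick host-side check can be sketched as follows; the paths are taken from the log above, and the assumption that the daemon should appear as a `socket_vmnet` process on this agent is ours, not the report's.

```shell
# Diagnostic sketch for the recurring socket_vmnet "Connection refused".
# SOCKET path comes from the log above (SocketVMnetPath).
SOCKET=/var/run/socket_vmnet

if [ -S "$SOCKET" ]; then
  echo "control socket present: $SOCKET"
else
  echo "control socket missing: $SOCKET"
fi

# A refused connection with the socket file present usually means nothing
# is listening on it; with it absent, the daemon never started at all.
if pgrep -f socket_vmnet >/dev/null 2>&1; then
  echo "socket_vmnet process: running"
else
  echo "socket_vmnet process: not found"
fi
```

Either outcome points at the CI host rather than at minikube itself, which is consistent with all 87 failures sharing this one error.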

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.810273458s)

                                                
                                                
-- stdout --
	* [flannel-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-786000 in cluster flannel-786000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 15:01:45.316896    3547 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:01:45.317049    3547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:01:45.317052    3547 out.go:309] Setting ErrFile to fd 2...
	I0912 15:01:45.317054    3547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:01:45.317172    3547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:01:45.318185    3547 out.go:303] Setting JSON to false
	I0912 15:01:45.333325    3547 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1879,"bootTime":1694554226,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:01:45.333403    3547 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:01:45.337195    3547 out.go:177] * [flannel-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:01:45.345187    3547 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:01:45.349140    3547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:01:45.345248    3547 notify.go:220] Checking for updates...
	I0912 15:01:45.355179    3547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:01:45.358165    3547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:01:45.361197    3547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:01:45.364233    3547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:01:45.367516    3547 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:01:45.367569    3547 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:01:45.372100    3547 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:01:45.379080    3547 start.go:298] selected driver: qemu2
	I0912 15:01:45.379084    3547 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:01:45.379091    3547 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:01:45.381073    3547 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:01:45.384139    3547 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:01:45.387304    3547 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:01:45.387337    3547 cni.go:84] Creating CNI manager for "flannel"
	I0912 15:01:45.387342    3547 start_flags.go:316] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0912 15:01:45.387348    3547 start_flags.go:321] config:
	{Name:flannel-786000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:flannel-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:01:45.391539    3547 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:01:45.398120    3547 out.go:177] * Starting control plane node flannel-786000 in cluster flannel-786000
	I0912 15:01:45.402026    3547 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:01:45.402050    3547 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:01:45.402062    3547 cache.go:57] Caching tarball of preloaded images
	I0912 15:01:45.402127    3547 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:01:45.402137    3547 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:01:45.402219    3547 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/flannel-786000/config.json ...
	I0912 15:01:45.402232    3547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/flannel-786000/config.json: {Name:mk36d73d723567287ce64f9de847bda355fc69d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:01:45.402459    3547 start.go:365] acquiring machines lock for flannel-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:01:45.402492    3547 start.go:369] acquired machines lock for "flannel-786000" in 26.458µs
	I0912 15:01:45.402505    3547 start.go:93] Provisioning new machine with config: &{Name:flannel-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:flannel-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:01:45.402539    3547 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:01:45.410111    3547 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:01:45.426696    3547 start.go:159] libmachine.API.Create for "flannel-786000" (driver="qemu2")
	I0912 15:01:45.426729    3547 client.go:168] LocalClient.Create starting
	I0912 15:01:45.426794    3547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:01:45.426824    3547 main.go:141] libmachine: Decoding PEM data...
	I0912 15:01:45.426837    3547 main.go:141] libmachine: Parsing certificate...
	I0912 15:01:45.426883    3547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:01:45.426907    3547 main.go:141] libmachine: Decoding PEM data...
	I0912 15:01:45.426916    3547 main.go:141] libmachine: Parsing certificate...
	I0912 15:01:45.427293    3547 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:01:45.636032    3547 main.go:141] libmachine: Creating SSH key...
	I0912 15:01:45.706529    3547 main.go:141] libmachine: Creating Disk image...
	I0912 15:01:45.706535    3547 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:01:45.706685    3547 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/disk.qcow2
	I0912 15:01:45.715367    3547 main.go:141] libmachine: STDOUT: 
	I0912 15:01:45.715386    3547 main.go:141] libmachine: STDERR: 
	I0912 15:01:45.715434    3547 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/disk.qcow2 +20000M
	I0912 15:01:45.722594    3547 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:01:45.722605    3547 main.go:141] libmachine: STDERR: 
	I0912 15:01:45.722621    3547 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/disk.qcow2
	I0912 15:01:45.722628    3547 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:01:45.722669    3547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:60:73:9b:a9:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/disk.qcow2
	I0912 15:01:45.724242    3547 main.go:141] libmachine: STDOUT: 
	I0912 15:01:45.724257    3547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:01:45.724280    3547 client.go:171] LocalClient.Create took 297.544709ms
	I0912 15:01:47.726401    3547 start.go:128] duration metric: createHost completed in 2.323889209s
	I0912 15:01:47.726474    3547 start.go:83] releasing machines lock for "flannel-786000", held for 2.324018292s
	W0912 15:01:47.726530    3547 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:01:47.738670    3547 out.go:177] * Deleting "flannel-786000" in qemu2 ...
	W0912 15:01:47.758636    3547 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:01:47.758678    3547 start.go:703] Will try again in 5 seconds ...
	I0912 15:01:52.760810    3547 start.go:365] acquiring machines lock for flannel-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:01:52.761255    3547 start.go:369] acquired machines lock for "flannel-786000" in 351.167µs
	I0912 15:01:52.761389    3547 start.go:93] Provisioning new machine with config: &{Name:flannel-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:flannel-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:01:52.761744    3547 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:01:52.771412    3547 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:01:52.818078    3547 start.go:159] libmachine.API.Create for "flannel-786000" (driver="qemu2")
	I0912 15:01:52.818129    3547 client.go:168] LocalClient.Create starting
	I0912 15:01:52.818309    3547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:01:52.818384    3547 main.go:141] libmachine: Decoding PEM data...
	I0912 15:01:52.818405    3547 main.go:141] libmachine: Parsing certificate...
	I0912 15:01:52.818485    3547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:01:52.818526    3547 main.go:141] libmachine: Decoding PEM data...
	I0912 15:01:52.818553    3547 main.go:141] libmachine: Parsing certificate...
	I0912 15:01:52.819129    3547 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:01:52.950931    3547 main.go:141] libmachine: Creating SSH key...
	I0912 15:01:53.030311    3547 main.go:141] libmachine: Creating Disk image...
	I0912 15:01:53.030316    3547 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:01:53.030453    3547 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/disk.qcow2
	I0912 15:01:53.038897    3547 main.go:141] libmachine: STDOUT: 
	I0912 15:01:53.038914    3547 main.go:141] libmachine: STDERR: 
	I0912 15:01:53.038975    3547 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/disk.qcow2 +20000M
	I0912 15:01:53.046071    3547 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:01:53.046085    3547 main.go:141] libmachine: STDERR: 
	I0912 15:01:53.046098    3547 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/disk.qcow2
	I0912 15:01:53.046104    3547 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:01:53.046140    3547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:79:c5:d1:0b:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/flannel-786000/disk.qcow2
	I0912 15:01:53.047712    3547 main.go:141] libmachine: STDOUT: 
	I0912 15:01:53.047727    3547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:01:53.047749    3547 client.go:171] LocalClient.Create took 229.613333ms
	I0912 15:01:55.049877    3547 start.go:128] duration metric: createHost completed in 2.288145375s
	I0912 15:01:55.049942    3547 start.go:83] releasing machines lock for "flannel-786000", held for 2.288709208s
	W0912 15:01:55.052449    3547 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:01:55.067880    3547 out.go:177] 
	W0912 15:01:55.075083    3547 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:01:55.075122    3547 out.go:239] * 
	* 
	W0912 15:01:55.077764    3547 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:01:55.087783    3547 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.81s)
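Each failed start above reduces to the same symptom: `socket_vmnet_client` reports `Connection refused` on the unix socket at `/var/run/socket_vmnet`, meaning no socket_vmnet daemon was listening there when the qemu2 driver tried to launch the VM. A minimal sketch of that failure mode (assuming Python, and using a temporary path as a stand-in for the real `/var/run/socket_vmnet`):

```python
import os
import socket
import tempfile

def probe_unix_socket(path: str) -> str:
    """Try to connect to a unix stream socket; report the outcome."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "connected"
    except OSError as e:
        return type(e).__name__
    finally:
        s.close()

# Stand-in for /var/run/socket_vmnet: bind() creates the socket file,
# then closing without listen()/accept() leaves it stale, so any client
# gets ECONNREFUSED -- the same error the qemu2 driver logs above.
stale = os.path.join(tempfile.mkdtemp(), "socket_vmnet")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(stale)
srv.close()

print(probe_unix_socket(stale))               # ConnectionRefusedError
print(probe_unix_socket(stale + ".missing"))  # FileNotFoundError
```

If a daemon were actually accepting on the path, the probe would return `connected`; how socket_vmnet is supervised on a given CI host (e.g. via a launchd service) is environment-specific and not shown in this log.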

TestNetworkPlugins/group/enable-default-cni/Start (9.68s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.68009375s)

-- stdout --
	* [enable-default-cni-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-786000 in cluster enable-default-cni-786000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:01:57.415748    3670 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:01:57.415876    3670 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:01:57.415879    3670 out.go:309] Setting ErrFile to fd 2...
	I0912 15:01:57.415882    3670 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:01:57.416013    3670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:01:57.417011    3670 out.go:303] Setting JSON to false
	I0912 15:01:57.432139    3670 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1891,"bootTime":1694554226,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:01:57.432233    3670 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:01:57.437808    3670 out.go:177] * [enable-default-cni-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:01:57.444856    3670 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:01:57.448807    3670 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:01:57.444923    3670 notify.go:220] Checking for updates...
	I0912 15:01:57.452798    3670 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:01:57.455809    3670 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:01:57.458784    3670 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:01:57.461748    3670 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:01:57.465097    3670 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:01:57.465153    3670 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:01:57.469824    3670 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:01:57.476788    3670 start.go:298] selected driver: qemu2
	I0912 15:01:57.476792    3670 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:01:57.476798    3670 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:01:57.478800    3670 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:01:57.482838    3670 out.go:177] * Automatically selected the socket_vmnet network
	E0912 15:01:57.485865    3670 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0912 15:01:57.485878    3670 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:01:57.485904    3670 cni.go:84] Creating CNI manager for "bridge"
	I0912 15:01:57.485909    3670 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:01:57.485915    3670 start_flags.go:321] config:
	{Name:enable-default-cni-786000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:01:57.490130    3670 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:01:57.495680    3670 out.go:177] * Starting control plane node enable-default-cni-786000 in cluster enable-default-cni-786000
	I0912 15:01:57.499774    3670 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:01:57.499794    3670 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:01:57.499811    3670 cache.go:57] Caching tarball of preloaded images
	I0912 15:01:57.499879    3670 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:01:57.499889    3670 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:01:57.499961    3670 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/enable-default-cni-786000/config.json ...
	I0912 15:01:57.499975    3670 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/enable-default-cni-786000/config.json: {Name:mk4f02791d6aba5a55796e15c45f3af1e6ef971b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:01:57.500198    3670 start.go:365] acquiring machines lock for enable-default-cni-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:01:57.500235    3670 start.go:369] acquired machines lock for "enable-default-cni-786000" in 26.875µs
	I0912 15:01:57.500250    3670 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:01:57.500281    3670 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:01:57.503790    3670 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:01:57.520424    3670 start.go:159] libmachine.API.Create for "enable-default-cni-786000" (driver="qemu2")
	I0912 15:01:57.520446    3670 client.go:168] LocalClient.Create starting
	I0912 15:01:57.520505    3670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:01:57.520531    3670 main.go:141] libmachine: Decoding PEM data...
	I0912 15:01:57.520542    3670 main.go:141] libmachine: Parsing certificate...
	I0912 15:01:57.520584    3670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:01:57.520604    3670 main.go:141] libmachine: Decoding PEM data...
	I0912 15:01:57.520611    3670 main.go:141] libmachine: Parsing certificate...
	I0912 15:01:57.520934    3670 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:01:57.635808    3670 main.go:141] libmachine: Creating SSH key...
	I0912 15:01:57.708158    3670 main.go:141] libmachine: Creating Disk image...
	I0912 15:01:57.708165    3670 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:01:57.708299    3670 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/disk.qcow2
	I0912 15:01:57.716735    3670 main.go:141] libmachine: STDOUT: 
	I0912 15:01:57.716748    3670 main.go:141] libmachine: STDERR: 
	I0912 15:01:57.716796    3670 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/disk.qcow2 +20000M
	I0912 15:01:57.723932    3670 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:01:57.723961    3670 main.go:141] libmachine: STDERR: 
	I0912 15:01:57.723980    3670 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/disk.qcow2
	I0912 15:01:57.723986    3670 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:01:57.724027    3670 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:1c:47:0c:fd:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/disk.qcow2
	I0912 15:01:57.725535    3670 main.go:141] libmachine: STDOUT: 
	I0912 15:01:57.725549    3670 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:01:57.725569    3670 client.go:171] LocalClient.Create took 205.122625ms
	I0912 15:01:59.727764    3670 start.go:128] duration metric: createHost completed in 2.227486584s
	I0912 15:01:59.727915    3670 start.go:83] releasing machines lock for "enable-default-cni-786000", held for 2.227655958s
	W0912 15:01:59.727985    3670 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:01:59.739132    3670 out.go:177] * Deleting "enable-default-cni-786000" in qemu2 ...
	W0912 15:01:59.761651    3670 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:01:59.761681    3670 start.go:703] Will try again in 5 seconds ...
	I0912 15:02:04.763833    3670 start.go:365] acquiring machines lock for enable-default-cni-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:02:04.764281    3670 start.go:369] acquired machines lock for "enable-default-cni-786000" in 334.667µs
	I0912 15:02:04.764420    3670 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:02:04.764689    3670 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:02:04.771375    3670 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:02:04.817878    3670 start.go:159] libmachine.API.Create for "enable-default-cni-786000" (driver="qemu2")
	I0912 15:02:04.817941    3670 client.go:168] LocalClient.Create starting
	I0912 15:02:04.818051    3670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:02:04.818115    3670 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:04.818136    3670 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:04.818206    3670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:02:04.818245    3670 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:04.818266    3670 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:04.818839    3670 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:02:04.947919    3670 main.go:141] libmachine: Creating SSH key...
	I0912 15:02:05.007370    3670 main.go:141] libmachine: Creating Disk image...
	I0912 15:02:05.007376    3670 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:02:05.007523    3670 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/disk.qcow2
	I0912 15:02:05.016162    3670 main.go:141] libmachine: STDOUT: 
	I0912 15:02:05.016177    3670 main.go:141] libmachine: STDERR: 
	I0912 15:02:05.016227    3670 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/disk.qcow2 +20000M
	I0912 15:02:05.023310    3670 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:02:05.023322    3670 main.go:141] libmachine: STDERR: 
	I0912 15:02:05.023333    3670 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/disk.qcow2
	I0912 15:02:05.023348    3670 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:02:05.023385    3670 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:ca:7c:b6:a6:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/enable-default-cni-786000/disk.qcow2
	I0912 15:02:05.024892    3670 main.go:141] libmachine: STDOUT: 
	I0912 15:02:05.024905    3670 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:02:05.024918    3670 client.go:171] LocalClient.Create took 206.975792ms
	I0912 15:02:07.027134    3670 start.go:128] duration metric: createHost completed in 2.262387833s
	I0912 15:02:07.027198    3670 start.go:83] releasing machines lock for "enable-default-cni-786000", held for 2.262938125s
	W0912 15:02:07.027616    3670 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:07.038185    3670 out.go:177] 
	W0912 15:02:07.042332    3670 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:02:07.042355    3670 out.go:239] * 
	* 
	W0912 15:02:07.044994    3670 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:02:07.054303    3670 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.68s)
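Note: every failure in this group reduces to the same root cause visible in the stderr above: the qemu2 driver's `socket_vmnet_client` cannot reach the socket_vmnet daemon at `/var/run/socket_vmnet`. A minimal sketch for checking this on the build host, assuming the default socket path from these logs and a Homebrew-managed socket_vmnet service:

```shell
#!/bin/sh
# Check whether the socket_vmnet daemon is listening at the path the
# qemu2 driver uses (SocketVMnetPath from the config dump above;
# adjust if your installation differs).
SOCK=/var/run/socket_vmnet

if [ -S "$SOCK" ]; then
  echo "socket_vmnet socket present at $SOCK"
else
  # Assumption: socket_vmnet was installed via Homebrew; the service
  # must run as root to create the vmnet interface.
  echo "no socket at $SOCK - try: sudo brew services start socket_vmnet"
fi
```

If the socket is missing, restarting the daemon and then re-running the suggested `minikube delete -p <profile>` is the usual recovery path; with the daemon down, every VM creation in this run fails the same way.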

TestNetworkPlugins/group/kindnet/Start (9.8s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.796937292s)

-- stdout --
	* [kindnet-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-786000 in cluster kindnet-786000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:02:09.256164    3785 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:02:09.256286    3785 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:02:09.256289    3785 out.go:309] Setting ErrFile to fd 2...
	I0912 15:02:09.256292    3785 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:02:09.256417    3785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:02:09.257448    3785 out.go:303] Setting JSON to false
	I0912 15:02:09.272585    3785 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1903,"bootTime":1694554226,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:02:09.272654    3785 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:02:09.277578    3785 out.go:177] * [kindnet-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:02:09.285731    3785 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:02:09.289613    3785 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:02:09.285820    3785 notify.go:220] Checking for updates...
	I0912 15:02:09.295684    3785 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:02:09.298676    3785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:02:09.301750    3785 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:02:09.304745    3785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:02:09.306459    3785 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:02:09.306502    3785 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:02:09.310685    3785 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:02:09.317573    3785 start.go:298] selected driver: qemu2
	I0912 15:02:09.317578    3785 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:02:09.317583    3785 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:02:09.319574    3785 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:02:09.322686    3785 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:02:09.325772    3785 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:02:09.325798    3785 cni.go:84] Creating CNI manager for "kindnet"
	I0912 15:02:09.325805    3785 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0912 15:02:09.325811    3785 start_flags.go:321] config:
	{Name:kindnet-786000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:02:09.329989    3785 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:02:09.336810    3785 out.go:177] * Starting control plane node kindnet-786000 in cluster kindnet-786000
	I0912 15:02:09.340736    3785 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:02:09.340758    3785 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:02:09.340770    3785 cache.go:57] Caching tarball of preloaded images
	I0912 15:02:09.340836    3785 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:02:09.340842    3785 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:02:09.340918    3785 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/kindnet-786000/config.json ...
	I0912 15:02:09.340932    3785 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/kindnet-786000/config.json: {Name:mkf370e1a624bb83cc037be4342f33d1cf26978b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:02:09.341143    3785 start.go:365] acquiring machines lock for kindnet-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:02:09.341176    3785 start.go:369] acquired machines lock for "kindnet-786000" in 26.792µs
	I0912 15:02:09.341189    3785 start.go:93] Provisioning new machine with config: &{Name:kindnet-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:02:09.341238    3785 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:02:09.349754    3785 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:02:09.366457    3785 start.go:159] libmachine.API.Create for "kindnet-786000" (driver="qemu2")
	I0912 15:02:09.366477    3785 client.go:168] LocalClient.Create starting
	I0912 15:02:09.366531    3785 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:02:09.366555    3785 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:09.366568    3785 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:09.366611    3785 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:02:09.366630    3785 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:09.366641    3785 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:09.366993    3785 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:02:09.536551    3785 main.go:141] libmachine: Creating SSH key...
	I0912 15:02:09.633185    3785 main.go:141] libmachine: Creating Disk image...
	I0912 15:02:09.633193    3785 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:02:09.633355    3785 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/disk.qcow2
	I0912 15:02:09.641826    3785 main.go:141] libmachine: STDOUT: 
	I0912 15:02:09.641842    3785 main.go:141] libmachine: STDERR: 
	I0912 15:02:09.641904    3785 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/disk.qcow2 +20000M
	I0912 15:02:09.649016    3785 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:02:09.649029    3785 main.go:141] libmachine: STDERR: 
	I0912 15:02:09.649044    3785 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/disk.qcow2
	I0912 15:02:09.649051    3785 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:02:09.649084    3785 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:74:08:62:3b:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/disk.qcow2
	I0912 15:02:09.650522    3785 main.go:141] libmachine: STDOUT: 
	I0912 15:02:09.650537    3785 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:02:09.650554    3785 client.go:171] LocalClient.Create took 284.078458ms
	I0912 15:02:11.652686    3785 start.go:128] duration metric: createHost completed in 2.311476875s
	I0912 15:02:11.652764    3785 start.go:83] releasing machines lock for "kindnet-786000", held for 2.311624167s
	W0912 15:02:11.652869    3785 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:11.660196    3785 out.go:177] * Deleting "kindnet-786000" in qemu2 ...
	W0912 15:02:11.682096    3785 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:11.682117    3785 start.go:703] Will try again in 5 seconds ...
	I0912 15:02:16.684226    3785 start.go:365] acquiring machines lock for kindnet-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:02:16.684713    3785 start.go:369] acquired machines lock for "kindnet-786000" in 354.625µs
	I0912 15:02:16.684829    3785 start.go:93] Provisioning new machine with config: &{Name:kindnet-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:02:16.685079    3785 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:02:16.693743    3785 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:02:16.738738    3785 start.go:159] libmachine.API.Create for "kindnet-786000" (driver="qemu2")
	I0912 15:02:16.738787    3785 client.go:168] LocalClient.Create starting
	I0912 15:02:16.738881    3785 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:02:16.738929    3785 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:16.738948    3785 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:16.739050    3785 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:02:16.739089    3785 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:16.739107    3785 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:16.739590    3785 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:02:16.884662    3785 main.go:141] libmachine: Creating SSH key...
	I0912 15:02:16.962147    3785 main.go:141] libmachine: Creating Disk image...
	I0912 15:02:16.962152    3785 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:02:16.962290    3785 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/disk.qcow2
	I0912 15:02:16.970767    3785 main.go:141] libmachine: STDOUT: 
	I0912 15:02:16.970782    3785 main.go:141] libmachine: STDERR: 
	I0912 15:02:16.970846    3785 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/disk.qcow2 +20000M
	I0912 15:02:16.977969    3785 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:02:16.977989    3785 main.go:141] libmachine: STDERR: 
	I0912 15:02:16.978005    3785 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/disk.qcow2
	I0912 15:02:16.978014    3785 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:02:16.978051    3785 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:c0:65:5b:c4:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kindnet-786000/disk.qcow2
	I0912 15:02:16.979617    3785 main.go:141] libmachine: STDOUT: 
	I0912 15:02:16.979629    3785 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:02:16.979644    3785 client.go:171] LocalClient.Create took 240.85675ms
	I0912 15:02:18.981818    3785 start.go:128] duration metric: createHost completed in 2.296725041s
	I0912 15:02:18.981929    3785 start.go:83] releasing machines lock for "kindnet-786000", held for 2.297236333s
	W0912 15:02:18.982431    3785 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:18.994129    3785 out.go:177] 
	W0912 15:02:18.998160    3785 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:02:18.998187    3785 out.go:239] * 
	* 
	W0912 15:02:19.000792    3785 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:02:19.011144    3785 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.80s)

TestNetworkPlugins/group/bridge/Start (9.76s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.7613335s)

-- stdout --
	* [bridge-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-786000 in cluster bridge-786000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:02:21.328774    3904 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:02:21.328929    3904 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:02:21.328932    3904 out.go:309] Setting ErrFile to fd 2...
	I0912 15:02:21.328934    3904 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:02:21.329206    3904 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:02:21.330377    3904 out.go:303] Setting JSON to false
	I0912 15:02:21.345786    3904 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1915,"bootTime":1694554226,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:02:21.345849    3904 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:02:21.351503    3904 out.go:177] * [bridge-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:02:21.359504    3904 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:02:21.363524    3904 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:02:21.359559    3904 notify.go:220] Checking for updates...
	I0912 15:02:21.366515    3904 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:02:21.369562    3904 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:02:21.372475    3904 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:02:21.375523    3904 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:02:21.378914    3904 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:02:21.378956    3904 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:02:21.383491    3904 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:02:21.390566    3904 start.go:298] selected driver: qemu2
	I0912 15:02:21.390571    3904 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:02:21.390577    3904 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:02:21.392571    3904 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:02:21.395475    3904 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:02:21.398601    3904 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:02:21.398629    3904 cni.go:84] Creating CNI manager for "bridge"
	I0912 15:02:21.398633    3904 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:02:21.398639    3904 start_flags.go:321] config:
	{Name:bridge-786000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:bridge-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:02:21.402912    3904 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:02:21.414499    3904 out.go:177] * Starting control plane node bridge-786000 in cluster bridge-786000
	I0912 15:02:21.418475    3904 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:02:21.418494    3904 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:02:21.418503    3904 cache.go:57] Caching tarball of preloaded images
	I0912 15:02:21.418560    3904 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:02:21.418565    3904 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:02:21.418626    3904 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/bridge-786000/config.json ...
	I0912 15:02:21.418639    3904 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/bridge-786000/config.json: {Name:mk6497f95c67300543107e64eef2a2bb84867338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:02:21.418860    3904 start.go:365] acquiring machines lock for bridge-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:02:21.418894    3904 start.go:369] acquired machines lock for "bridge-786000" in 26.75µs
	I0912 15:02:21.418906    3904 start.go:93] Provisioning new machine with config: &{Name:bridge-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:bridge-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:02:21.418945    3904 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:02:21.427488    3904 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:02:21.444964    3904 start.go:159] libmachine.API.Create for "bridge-786000" (driver="qemu2")
	I0912 15:02:21.444988    3904 client.go:168] LocalClient.Create starting
	I0912 15:02:21.445064    3904 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:02:21.445092    3904 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:21.445105    3904 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:21.445153    3904 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:02:21.445172    3904 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:21.445188    3904 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:21.445567    3904 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:02:21.568932    3904 main.go:141] libmachine: Creating SSH key...
	I0912 15:02:21.649422    3904 main.go:141] libmachine: Creating Disk image...
	I0912 15:02:21.649427    3904 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:02:21.649569    3904 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/disk.qcow2
	I0912 15:02:21.658098    3904 main.go:141] libmachine: STDOUT: 
	I0912 15:02:21.658113    3904 main.go:141] libmachine: STDERR: 
	I0912 15:02:21.658174    3904 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/disk.qcow2 +20000M
	I0912 15:02:21.665354    3904 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:02:21.665368    3904 main.go:141] libmachine: STDERR: 
	I0912 15:02:21.665381    3904 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/disk.qcow2
	I0912 15:02:21.665386    3904 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:02:21.665420    3904 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:41:0f:75:2f:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/disk.qcow2
	I0912 15:02:21.666978    3904 main.go:141] libmachine: STDOUT: 
	I0912 15:02:21.666993    3904 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:02:21.667010    3904 client.go:171] LocalClient.Create took 222.02075ms
	I0912 15:02:23.669147    3904 start.go:128] duration metric: createHost completed in 2.250228708s
	I0912 15:02:23.669217    3904 start.go:83] releasing machines lock for "bridge-786000", held for 2.250357792s
	W0912 15:02:23.669274    3904 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:23.676545    3904 out.go:177] * Deleting "bridge-786000" in qemu2 ...
	W0912 15:02:23.696483    3904 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:23.696508    3904 start.go:703] Will try again in 5 seconds ...
	I0912 15:02:28.698625    3904 start.go:365] acquiring machines lock for bridge-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:02:28.699063    3904 start.go:369] acquired machines lock for "bridge-786000" in 331.959µs
	I0912 15:02:28.699177    3904 start.go:93] Provisioning new machine with config: &{Name:bridge-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:bridge-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:02:28.699428    3904 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:02:28.705069    3904 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:02:28.749041    3904 start.go:159] libmachine.API.Create for "bridge-786000" (driver="qemu2")
	I0912 15:02:28.749079    3904 client.go:168] LocalClient.Create starting
	I0912 15:02:28.749196    3904 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:02:28.749259    3904 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:28.749295    3904 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:28.749366    3904 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:02:28.749408    3904 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:28.749424    3904 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:28.749909    3904 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:02:28.885124    3904 main.go:141] libmachine: Creating SSH key...
	I0912 15:02:29.001465    3904 main.go:141] libmachine: Creating Disk image...
	I0912 15:02:29.001477    3904 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:02:29.001627    3904 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/disk.qcow2
	I0912 15:02:29.010071    3904 main.go:141] libmachine: STDOUT: 
	I0912 15:02:29.010087    3904 main.go:141] libmachine: STDERR: 
	I0912 15:02:29.010142    3904 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/disk.qcow2 +20000M
	I0912 15:02:29.017396    3904 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:02:29.017420    3904 main.go:141] libmachine: STDERR: 
	I0912 15:02:29.017440    3904 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/disk.qcow2
	I0912 15:02:29.017450    3904 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:02:29.017483    3904 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:1b:95:6c:01:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/bridge-786000/disk.qcow2
	I0912 15:02:29.019005    3904 main.go:141] libmachine: STDOUT: 
	I0912 15:02:29.019018    3904 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:02:29.019031    3904 client.go:171] LocalClient.Create took 269.950958ms
	I0912 15:02:31.021179    3904 start.go:128] duration metric: createHost completed in 2.321745583s
	I0912 15:02:31.021246    3904 start.go:83] releasing machines lock for "bridge-786000", held for 2.322199042s
	W0912 15:02:31.021738    3904 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:31.032534    3904 out.go:177] 
	W0912 15:02:31.036523    3904 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:02:31.036553    3904 out.go:239] * 
	* 
	W0912 15:02:31.039033    3904 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:02:31.048386    3904 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.76s)
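Every start attempt above dies the same way: `socket_vmnet_client` cannot reach the `/var/run/socket_vmnet` unix socket ("Connection refused"), meaning the socket file's listener (the socket_vmnet daemon) is not running. A minimal, self-contained sketch of that failure mode, using a temporary stand-in path rather than the real `/var/run/socket_vmnet`: a unix socket file that exists on disk but has no process accepting connections produces exactly this error.

```python
import os
import socket
import tempfile

# Stand-in path for /var/run/socket_vmnet (hypothetical temp location,
# so this runs without the real daemon or root privileges).
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")

# bind() creates the socket file; closing without listen()/accept()
# leaves the file on disk with no listener behind it.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.close()

# A client connect now fails with ECONNREFUSED -- the same condition
# the libmachine log reports for socket_vmnet_client.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    cli.connect(path)
    result = "connected"
except ConnectionRefusedError:
    result = "Connection refused"
finally:
    cli.close()

print(result)
```

This distinguishes the observed failure (socket present, daemon down) from a missing socket file, which would instead surface as "No such file or directory".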

TestNetworkPlugins/group/kubenet/Start (9.68s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.675004542s)

-- stdout --
	* [kubenet-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-786000 in cluster kubenet-786000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:02:33.256248    4014 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:02:33.256378    4014 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:02:33.256381    4014 out.go:309] Setting ErrFile to fd 2...
	I0912 15:02:33.256383    4014 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:02:33.256513    4014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:02:33.257530    4014 out.go:303] Setting JSON to false
	I0912 15:02:33.272517    4014 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1927,"bootTime":1694554226,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:02:33.272600    4014 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:02:33.277651    4014 out.go:177] * [kubenet-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:02:33.287625    4014 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:02:33.283701    4014 notify.go:220] Checking for updates...
	I0912 15:02:33.295549    4014 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:02:33.303539    4014 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:02:33.306615    4014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:02:33.309663    4014 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:02:33.312630    4014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:02:33.316044    4014 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:02:33.316086    4014 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:02:33.320581    4014 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:02:33.327597    4014 start.go:298] selected driver: qemu2
	I0912 15:02:33.327601    4014 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:02:33.327607    4014 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:02:33.329647    4014 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:02:33.332619    4014 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:02:33.335738    4014 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:02:33.335771    4014 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0912 15:02:33.335775    4014 start_flags.go:321] config:
	{Name:kubenet-786000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0912 15:02:33.340207    4014 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:02:33.347624    4014 out.go:177] * Starting control plane node kubenet-786000 in cluster kubenet-786000
	I0912 15:02:33.351585    4014 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:02:33.351607    4014 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:02:33.351619    4014 cache.go:57] Caching tarball of preloaded images
	I0912 15:02:33.351694    4014 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:02:33.351700    4014 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:02:33.351777    4014 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/kubenet-786000/config.json ...
	I0912 15:02:33.351791    4014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/kubenet-786000/config.json: {Name:mk04f941d0c4e68aebbf4cc7f78b55a3e914470d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:02:33.352029    4014 start.go:365] acquiring machines lock for kubenet-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:02:33.352065    4014 start.go:369] acquired machines lock for "kubenet-786000" in 28.916µs
	I0912 15:02:33.352079    4014 start.go:93] Provisioning new machine with config: &{Name:kubenet-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:02:33.352111    4014 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:02:33.359665    4014 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:02:33.377416    4014 start.go:159] libmachine.API.Create for "kubenet-786000" (driver="qemu2")
	I0912 15:02:33.377441    4014 client.go:168] LocalClient.Create starting
	I0912 15:02:33.377503    4014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:02:33.377531    4014 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:33.377543    4014 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:33.377587    4014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:02:33.377607    4014 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:33.377615    4014 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:33.378039    4014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:02:33.495660    4014 main.go:141] libmachine: Creating SSH key...
	I0912 15:02:33.553519    4014 main.go:141] libmachine: Creating Disk image...
	I0912 15:02:33.553524    4014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:02:33.553663    4014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/disk.qcow2
	I0912 15:02:33.562199    4014 main.go:141] libmachine: STDOUT: 
	I0912 15:02:33.562212    4014 main.go:141] libmachine: STDERR: 
	I0912 15:02:33.562256    4014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/disk.qcow2 +20000M
	I0912 15:02:33.569359    4014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:02:33.569372    4014 main.go:141] libmachine: STDERR: 
	I0912 15:02:33.569386    4014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/disk.qcow2
	I0912 15:02:33.569393    4014 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:02:33.569433    4014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:72:74:44:c4:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/disk.qcow2
	I0912 15:02:33.570902    4014 main.go:141] libmachine: STDOUT: 
	I0912 15:02:33.570916    4014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:02:33.570933    4014 client.go:171] LocalClient.Create took 193.488791ms
	I0912 15:02:35.573071    4014 start.go:128] duration metric: createHost completed in 2.220981125s
	I0912 15:02:35.573143    4014 start.go:83] releasing machines lock for "kubenet-786000", held for 2.221112416s
	W0912 15:02:35.573206    4014 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:35.584449    4014 out.go:177] * Deleting "kubenet-786000" in qemu2 ...
	W0912 15:02:35.603775    4014 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:35.603807    4014 start.go:703] Will try again in 5 seconds ...
	I0912 15:02:40.605980    4014 start.go:365] acquiring machines lock for kubenet-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:02:40.606577    4014 start.go:369] acquired machines lock for "kubenet-786000" in 445.25µs
	I0912 15:02:40.606746    4014 start.go:93] Provisioning new machine with config: &{Name:kubenet-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:02:40.607053    4014 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:02:40.611859    4014 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:02:40.658620    4014 start.go:159] libmachine.API.Create for "kubenet-786000" (driver="qemu2")
	I0912 15:02:40.658664    4014 client.go:168] LocalClient.Create starting
	I0912 15:02:40.658783    4014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:02:40.658845    4014 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:40.658862    4014 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:40.658927    4014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:02:40.658971    4014 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:40.658986    4014 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:40.659593    4014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:02:40.786517    4014 main.go:141] libmachine: Creating SSH key...
	I0912 15:02:40.843629    4014 main.go:141] libmachine: Creating Disk image...
	I0912 15:02:40.843637    4014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:02:40.843773    4014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/disk.qcow2
	I0912 15:02:40.852111    4014 main.go:141] libmachine: STDOUT: 
	I0912 15:02:40.852125    4014 main.go:141] libmachine: STDERR: 
	I0912 15:02:40.852187    4014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/disk.qcow2 +20000M
	I0912 15:02:40.859358    4014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:02:40.859382    4014 main.go:141] libmachine: STDERR: 
	I0912 15:02:40.859397    4014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/disk.qcow2
	I0912 15:02:40.859411    4014 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:02:40.859453    4014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:86:02:a2:2d:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/kubenet-786000/disk.qcow2
	I0912 15:02:40.861011    4014 main.go:141] libmachine: STDOUT: 
	I0912 15:02:40.861023    4014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:02:40.861038    4014 client.go:171] LocalClient.Create took 202.3725ms
	I0912 15:02:42.863171    4014 start.go:128] duration metric: createHost completed in 2.256135792s
	I0912 15:02:42.863241    4014 start.go:83] releasing machines lock for "kubenet-786000", held for 2.256658959s
	W0912 15:02:42.863647    4014 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:42.874420    4014 out.go:177] 
	W0912 15:02:42.878453    4014 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:02:42.878494    4014 out.go:239] * 
	* 
	W0912 15:02:42.881242    4014 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:02:42.890374    4014 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.68s)
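Every failure in this group reduces to the same root cause visible in the log: `socket_vmnet_client` cannot reach the `socket_vmnet` daemon at `/var/run/socket_vmnet` ("Connection refused"). A minimal pre-flight check is sketched below; the socket path is taken from the log above, while install locations for the daemon vary between Homebrew and manual installs, so treat them as assumptions.

```shell
# Pre-flight check for the socket_vmnet failures seen in this report.
# The socket path matches SocketVMnetPath in the cluster config above.
SOCK=/var/run/socket_vmnet

if [ -S "$SOCK" ]; then
  echo "socket present: $SOCK (socket_vmnet daemon appears to be running)"
else
  echo "socket missing: $SOCK (start the socket_vmnet daemon before running the tests)"
fi
```

If the socket is missing, starting the daemon (e.g. via its launchd service, or by running the `socket_vmnet` binary as root with this socket path, depending on how it was installed) should clear the repeated "Connection refused" failures.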

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.652850917s)

                                                
                                                
-- stdout --
	* [custom-flannel-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-786000 in cluster custom-flannel-786000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 15:02:45.086786    4126 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:02:45.087127    4126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:02:45.087131    4126 out.go:309] Setting ErrFile to fd 2...
	I0912 15:02:45.087134    4126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:02:45.087323    4126 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:02:45.088685    4126 out.go:303] Setting JSON to false
	I0912 15:02:45.104059    4126 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1939,"bootTime":1694554226,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:02:45.104124    4126 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:02:45.109426    4126 out.go:177] * [custom-flannel-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:02:45.117383    4126 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:02:45.121379    4126 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:02:45.117448    4126 notify.go:220] Checking for updates...
	I0912 15:02:45.127387    4126 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:02:45.130402    4126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:02:45.133304    4126 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:02:45.136374    4126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:02:45.139850    4126 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:02:45.139901    4126 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:02:45.144346    4126 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:02:45.151372    4126 start.go:298] selected driver: qemu2
	I0912 15:02:45.151377    4126 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:02:45.151383    4126 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:02:45.153391    4126 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:02:45.156312    4126 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:02:45.159383    4126 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:02:45.159407    4126 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0912 15:02:45.159418    4126 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0912 15:02:45.159425    4126 start_flags.go:321] config:
	{Name:custom-flannel-786000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:02:45.163811    4126 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:02:45.171397    4126 out.go:177] * Starting control plane node custom-flannel-786000 in cluster custom-flannel-786000
	I0912 15:02:45.175410    4126 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:02:45.175435    4126 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:02:45.175453    4126 cache.go:57] Caching tarball of preloaded images
	I0912 15:02:45.175519    4126 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:02:45.175525    4126 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:02:45.175595    4126 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/custom-flannel-786000/config.json ...
	I0912 15:02:45.175607    4126 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/custom-flannel-786000/config.json: {Name:mk6144717dd9ae22774a9279069fece0afe5737e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:02:45.175825    4126 start.go:365] acquiring machines lock for custom-flannel-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:02:45.175861    4126 start.go:369] acquired machines lock for "custom-flannel-786000" in 24.958µs
	I0912 15:02:45.175873    4126 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:02:45.175918    4126 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:02:45.184402    4126 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:02:45.201314    4126 start.go:159] libmachine.API.Create for "custom-flannel-786000" (driver="qemu2")
	I0912 15:02:45.201336    4126 client.go:168] LocalClient.Create starting
	I0912 15:02:45.201401    4126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:02:45.201429    4126 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:45.201445    4126 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:45.201487    4126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:02:45.201506    4126 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:45.201516    4126 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:45.201872    4126 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:02:45.317960    4126 main.go:141] libmachine: Creating SSH key...
	I0912 15:02:45.373372    4126 main.go:141] libmachine: Creating Disk image...
	I0912 15:02:45.373377    4126 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:02:45.373511    4126 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/disk.qcow2
	I0912 15:02:45.382100    4126 main.go:141] libmachine: STDOUT: 
	I0912 15:02:45.382111    4126 main.go:141] libmachine: STDERR: 
	I0912 15:02:45.382159    4126 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/disk.qcow2 +20000M
	I0912 15:02:45.389313    4126 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:02:45.389326    4126 main.go:141] libmachine: STDERR: 
	I0912 15:02:45.389339    4126 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/disk.qcow2
	I0912 15:02:45.389348    4126 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:02:45.389380    4126 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:56:56:da:9a:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/disk.qcow2
	I0912 15:02:45.390863    4126 main.go:141] libmachine: STDOUT: 
	I0912 15:02:45.390875    4126 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:02:45.390891    4126 client.go:171] LocalClient.Create took 189.552ms
	I0912 15:02:47.393035    4126 start.go:128] duration metric: createHost completed in 2.217138417s
	I0912 15:02:47.393100    4126 start.go:83] releasing machines lock for "custom-flannel-786000", held for 2.217272583s
	W0912 15:02:47.393189    4126 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:47.405357    4126 out.go:177] * Deleting "custom-flannel-786000" in qemu2 ...
	W0912 15:02:47.425858    4126 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:47.425883    4126 start.go:703] Will try again in 5 seconds ...
	I0912 15:02:52.427974    4126 start.go:365] acquiring machines lock for custom-flannel-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:02:52.428490    4126 start.go:369] acquired machines lock for "custom-flannel-786000" in 402.542µs
	I0912 15:02:52.428658    4126 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:02:52.428921    4126 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:02:52.437662    4126 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:02:52.485158    4126 start.go:159] libmachine.API.Create for "custom-flannel-786000" (driver="qemu2")
	I0912 15:02:52.485207    4126 client.go:168] LocalClient.Create starting
	I0912 15:02:52.485329    4126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:02:52.485388    4126 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:52.485406    4126 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:52.485466    4126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:02:52.485503    4126 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:52.485515    4126 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:52.485994    4126 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:02:52.618916    4126 main.go:141] libmachine: Creating SSH key...
	I0912 15:02:52.650538    4126 main.go:141] libmachine: Creating Disk image...
	I0912 15:02:52.650543    4126 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:02:52.650690    4126 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/disk.qcow2
	I0912 15:02:52.659273    4126 main.go:141] libmachine: STDOUT: 
	I0912 15:02:52.659287    4126 main.go:141] libmachine: STDERR: 
	I0912 15:02:52.659346    4126 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/disk.qcow2 +20000M
	I0912 15:02:52.666450    4126 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:02:52.666462    4126 main.go:141] libmachine: STDERR: 
	I0912 15:02:52.666474    4126 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/disk.qcow2
	I0912 15:02:52.666481    4126 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:02:52.666520    4126 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:f0:fa:3c:a4:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/custom-flannel-786000/disk.qcow2
	I0912 15:02:52.668026    4126 main.go:141] libmachine: STDOUT: 
	I0912 15:02:52.668039    4126 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:02:52.668060    4126 client.go:171] LocalClient.Create took 182.842958ms
	I0912 15:02:54.670194    4126 start.go:128] duration metric: createHost completed in 2.241284375s
	I0912 15:02:54.670267    4126 start.go:83] releasing machines lock for "custom-flannel-786000", held for 2.24179225s
	W0912 15:02:54.670672    4126 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:54.681281    4126 out.go:177] 
	W0912 15:02:54.685379    4126 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:02:54.685404    4126 out.go:239] * 
	W0912 15:02:54.688006    4126 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:02:54.698370    4126 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.66s)
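Every failure above bottoms out in the same STDERR line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`. The qemu2 driver launches each VM through `socket_vmnet_client`, which needs the socket_vmnet daemon listening on that Unix socket before `minikube start` can bring up networking. A minimal diagnostic sketch for the CI host (not part of the test suite; the socket path matches this log, and the `brew services` hint assumes a Homebrew install of socket_vmnet):

```shell
# Check whether the socket_vmnet daemon's Unix socket exists.
# If it is missing, every qemu2 VM start will fail exactly as in this report.
SOCKET=/var/run/socket_vmnet
if [ -S "$SOCKET" ]; then
  status="socket present: $SOCKET"
else
  status="socket missing: $SOCKET (start the daemon, e.g. 'sudo brew services start socket_vmnet')"
fi
echo "$status"
```

A `-S` test only proves the socket file exists, not that the daemon behind it is healthy; a follow-up `socket_vmnet_client "$SOCKET" true` would exercise the actual connection path the driver uses.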

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.700513209s)

                                                
                                                
-- stdout --
	* [calico-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-786000 in cluster calico-786000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 15:02:57.083436    4245 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:02:57.083580    4245 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:02:57.083583    4245 out.go:309] Setting ErrFile to fd 2...
	I0912 15:02:57.083586    4245 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:02:57.083708    4245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:02:57.084700    4245 out.go:303] Setting JSON to false
	I0912 15:02:57.099705    4245 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1951,"bootTime":1694554226,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:02:57.099811    4245 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:02:57.105401    4245 out.go:177] * [calico-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:02:57.112344    4245 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:02:57.116393    4245 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:02:57.112415    4245 notify.go:220] Checking for updates...
	I0912 15:02:57.122392    4245 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:02:57.125424    4245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:02:57.128412    4245 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:02:57.129727    4245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:02:57.132829    4245 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:02:57.132881    4245 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:02:57.137445    4245 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:02:57.142370    4245 start.go:298] selected driver: qemu2
	I0912 15:02:57.142374    4245 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:02:57.142379    4245 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:02:57.144313    4245 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:02:57.147510    4245 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:02:57.150526    4245 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:02:57.150547    4245 cni.go:84] Creating CNI manager for "calico"
	I0912 15:02:57.150551    4245 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0912 15:02:57.150558    4245 start_flags.go:321] config:
	{Name:calico-786000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:02:57.154648    4245 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:02:57.157428    4245 out.go:177] * Starting control plane node calico-786000 in cluster calico-786000
	I0912 15:02:57.165394    4245 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:02:57.165414    4245 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:02:57.165425    4245 cache.go:57] Caching tarball of preloaded images
	I0912 15:02:57.165488    4245 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:02:57.165500    4245 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:02:57.165572    4245 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/calico-786000/config.json ...
	I0912 15:02:57.165589    4245 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/calico-786000/config.json: {Name:mk330b8812dec77f0265ed57397bf095b7f05bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:02:57.165795    4245 start.go:365] acquiring machines lock for calico-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:02:57.165827    4245 start.go:369] acquired machines lock for "calico-786000" in 25.417µs
	I0912 15:02:57.165838    4245 start.go:93] Provisioning new machine with config: &{Name:calico-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:02:57.165865    4245 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:02:57.174372    4245 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:02:57.190435    4245 start.go:159] libmachine.API.Create for "calico-786000" (driver="qemu2")
	I0912 15:02:57.190456    4245 client.go:168] LocalClient.Create starting
	I0912 15:02:57.190511    4245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:02:57.190535    4245 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:57.190545    4245 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:57.190581    4245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:02:57.190601    4245 main.go:141] libmachine: Decoding PEM data...
	I0912 15:02:57.190612    4245 main.go:141] libmachine: Parsing certificate...
	I0912 15:02:57.190949    4245 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:02:57.305134    4245 main.go:141] libmachine: Creating SSH key...
	I0912 15:02:57.379309    4245 main.go:141] libmachine: Creating Disk image...
	I0912 15:02:57.379317    4245 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:02:57.379452    4245 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/disk.qcow2
	I0912 15:02:57.387887    4245 main.go:141] libmachine: STDOUT: 
	I0912 15:02:57.387901    4245 main.go:141] libmachine: STDERR: 
	I0912 15:02:57.387955    4245 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/disk.qcow2 +20000M
	I0912 15:02:57.395049    4245 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:02:57.395061    4245 main.go:141] libmachine: STDERR: 
	I0912 15:02:57.395078    4245 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/disk.qcow2
	I0912 15:02:57.395083    4245 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:02:57.395120    4245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:2f:ab:4d:30:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/disk.qcow2
	I0912 15:02:57.396603    4245 main.go:141] libmachine: STDOUT: 
	I0912 15:02:57.396616    4245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:02:57.396632    4245 client.go:171] LocalClient.Create took 206.176042ms
	I0912 15:02:59.398809    4245 start.go:128] duration metric: createHost completed in 2.232960167s
	I0912 15:02:59.398893    4245 start.go:83] releasing machines lock for "calico-786000", held for 2.233099208s
	W0912 15:02:59.398988    4245 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:59.406620    4245 out.go:177] * Deleting "calico-786000" in qemu2 ...
	W0912 15:02:59.425969    4245 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:02:59.426002    4245 start.go:703] Will try again in 5 seconds ...
	I0912 15:03:04.428146    4245 start.go:365] acquiring machines lock for calico-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:04.428663    4245 start.go:369] acquired machines lock for "calico-786000" in 323.708µs
	I0912 15:03:04.428767    4245 start.go:93] Provisioning new machine with config: &{Name:calico-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:03:04.429041    4245 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:03:04.434605    4245 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:03:04.478289    4245 start.go:159] libmachine.API.Create for "calico-786000" (driver="qemu2")
	I0912 15:03:04.478335    4245 client.go:168] LocalClient.Create starting
	I0912 15:03:04.478433    4245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:03:04.478486    4245 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:04.478503    4245 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:04.478565    4245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:03:04.478607    4245 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:04.478620    4245 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:04.479121    4245 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:03:04.615864    4245 main.go:141] libmachine: Creating SSH key...
	I0912 15:03:04.698813    4245 main.go:141] libmachine: Creating Disk image...
	I0912 15:03:04.698818    4245 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:03:04.698962    4245 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/disk.qcow2
	I0912 15:03:04.707349    4245 main.go:141] libmachine: STDOUT: 
	I0912 15:03:04.707365    4245 main.go:141] libmachine: STDERR: 
	I0912 15:03:04.707412    4245 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/disk.qcow2 +20000M
	I0912 15:03:04.714532    4245 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:03:04.714548    4245 main.go:141] libmachine: STDERR: 
	I0912 15:03:04.714562    4245 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/disk.qcow2
	I0912 15:03:04.714567    4245 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:03:04.714603    4245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:19:53:be:b4:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/calico-786000/disk.qcow2
	I0912 15:03:04.716070    4245 main.go:141] libmachine: STDOUT: 
	I0912 15:03:04.716084    4245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:04.716103    4245 client.go:171] LocalClient.Create took 237.768375ms
	I0912 15:03:06.718285    4245 start.go:128] duration metric: createHost completed in 2.289228375s
	I0912 15:03:06.718368    4245 start.go:83] releasing machines lock for "calico-786000", held for 2.289724625s
	W0912 15:03:06.718869    4245 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:06.728498    4245 out.go:177] 
	W0912 15:03:06.732541    4245 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:03:06.732566    4245 out.go:239] * 
	W0912 15:03:06.735052    4245 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:03:06.742484    4245 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.70s)

                                                
                                    
TestNetworkPlugins/group/false/Start (9.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-786000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.773679875s)

                                                
                                                
-- stdout --
	* [false-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-786000 in cluster false-786000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:03:09.137372    4366 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:09.137507    4366 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:09.137511    4366 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:09.137514    4366 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:09.137670    4366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:09.138673    4366 out.go:303] Setting JSON to false
	I0912 15:03:09.154050    4366 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1963,"bootTime":1694554226,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:03:09.154147    4366 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:03:09.159221    4366 out.go:177] * [false-786000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:03:09.167241    4366 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:03:09.167300    4366 notify.go:220] Checking for updates...
	I0912 15:03:09.172228    4366 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:03:09.175287    4366 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:03:09.178235    4366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:03:09.181206    4366 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:03:09.184258    4366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:03:09.187628    4366 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:09.187671    4366 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:03:09.192170    4366 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:03:09.198107    4366 start.go:298] selected driver: qemu2
	I0912 15:03:09.198111    4366 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:03:09.198116    4366 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:03:09.200062    4366 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:03:09.203217    4366 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:03:09.206305    4366 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:03:09.206330    4366 cni.go:84] Creating CNI manager for "false"
	I0912 15:03:09.206335    4366 start_flags.go:321] config:
	{Name:false-786000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:false-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPI
D:0 AutoPauseInterval:1m0s}
	I0912 15:03:09.210491    4366 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:09.217193    4366 out.go:177] * Starting control plane node false-786000 in cluster false-786000
	I0912 15:03:09.221201    4366 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:03:09.221222    4366 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:03:09.221237    4366 cache.go:57] Caching tarball of preloaded images
	I0912 15:03:09.221310    4366 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:03:09.221323    4366 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:03:09.221395    4366 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/false-786000/config.json ...
	I0912 15:03:09.221411    4366 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/false-786000/config.json: {Name:mk24a0c043931ec2919c7ff8404b33283889954c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:03:09.221612    4366 start.go:365] acquiring machines lock for false-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:09.221644    4366 start.go:369] acquired machines lock for "false-786000" in 26.125µs
	I0912 15:03:09.221655    4366 start.go:93] Provisioning new machine with config: &{Name:false-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:false-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:03:09.221688    4366 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:03:09.230219    4366 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:03:09.246345    4366 start.go:159] libmachine.API.Create for "false-786000" (driver="qemu2")
	I0912 15:03:09.246371    4366 client.go:168] LocalClient.Create starting
	I0912 15:03:09.246438    4366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:03:09.246465    4366 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:09.246475    4366 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:09.246520    4366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:03:09.246540    4366 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:09.246549    4366 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:09.246909    4366 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:03:09.362613    4366 main.go:141] libmachine: Creating SSH key...
	I0912 15:03:09.480612    4366 main.go:141] libmachine: Creating Disk image...
	I0912 15:03:09.480622    4366 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:03:09.480774    4366 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/disk.qcow2
	I0912 15:03:09.489725    4366 main.go:141] libmachine: STDOUT: 
	I0912 15:03:09.489743    4366 main.go:141] libmachine: STDERR: 
	I0912 15:03:09.489806    4366 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/disk.qcow2 +20000M
	I0912 15:03:09.496963    4366 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:03:09.496974    4366 main.go:141] libmachine: STDERR: 
	I0912 15:03:09.496985    4366 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/disk.qcow2
	I0912 15:03:09.496989    4366 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:03:09.497028    4366 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:69:75:26:dd:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/disk.qcow2
	I0912 15:03:09.498616    4366 main.go:141] libmachine: STDOUT: 
	I0912 15:03:09.498628    4366 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:09.498645    4366 client.go:171] LocalClient.Create took 252.275292ms
	I0912 15:03:11.500774    4366 start.go:128] duration metric: createHost completed in 2.279102458s
	I0912 15:03:11.500838    4366 start.go:83] releasing machines lock for "false-786000", held for 2.27923025s
	W0912 15:03:11.500923    4366 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:11.509182    4366 out.go:177] * Deleting "false-786000" in qemu2 ...
	W0912 15:03:11.529271    4366 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:11.529304    4366 start.go:703] Will try again in 5 seconds ...
	I0912 15:03:16.531439    4366 start.go:365] acquiring machines lock for false-786000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:16.531858    4366 start.go:369] acquired machines lock for "false-786000" in 298.584µs
	I0912 15:03:16.531969    4366 start.go:93] Provisioning new machine with config: &{Name:false-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:false-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:03:16.532277    4366 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:03:16.541912    4366 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0912 15:03:16.589919    4366 start.go:159] libmachine.API.Create for "false-786000" (driver="qemu2")
	I0912 15:03:16.589950    4366 client.go:168] LocalClient.Create starting
	I0912 15:03:16.590079    4366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:03:16.590137    4366 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:16.590153    4366 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:16.590218    4366 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:03:16.590253    4366 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:16.590268    4366 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:16.590725    4366 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:03:16.724432    4366 main.go:141] libmachine: Creating SSH key...
	I0912 15:03:16.822121    4366 main.go:141] libmachine: Creating Disk image...
	I0912 15:03:16.822126    4366 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:03:16.822276    4366 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/disk.qcow2
	I0912 15:03:16.830838    4366 main.go:141] libmachine: STDOUT: 
	I0912 15:03:16.830859    4366 main.go:141] libmachine: STDERR: 
	I0912 15:03:16.830924    4366 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/disk.qcow2 +20000M
	I0912 15:03:16.838074    4366 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:03:16.838085    4366 main.go:141] libmachine: STDERR: 
	I0912 15:03:16.838099    4366 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/disk.qcow2
	I0912 15:03:16.838105    4366 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:03:16.838147    4366 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:21:50:50:54:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/false-786000/disk.qcow2
	I0912 15:03:16.839619    4366 main.go:141] libmachine: STDOUT: 
	I0912 15:03:16.839633    4366 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:16.839647    4366 client.go:171] LocalClient.Create took 249.697209ms
	I0912 15:03:18.841861    4366 start.go:128] duration metric: createHost completed in 2.3095905s
	I0912 15:03:18.841921    4366 start.go:83] releasing machines lock for "false-786000", held for 2.310081167s
	W0912 15:03:18.842393    4366 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:18.853025    4366 out.go:177] 
	W0912 15:03:18.857096    4366 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:03:18.857136    4366 out.go:239] * 
	* 
	W0912 15:03:18.859823    4366 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:03:18.870067    4366 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.78s)
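Every failure in this group reduces to the same root cause visible in the log: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal pre-flight sketch for the test host is below; the socket path matches the log, but the `brew services` command is an assumption about a Homebrew-based socket_vmnet install and should be verified against the actual setup.

```shell
# Pre-flight check for the qemu2/socket_vmnet network used by these tests.
socket_ready() {
  # True only if the given path exists and is a unix domain socket.
  [ -S "${1:-/var/run/socket_vmnet}" ]
}

if socket_ready /var/run/socket_vmnet; then
  echo "socket_vmnet socket present"
else
  # Hypothetical remediation for a Homebrew install; adjust to your setup.
  echo "socket_vmnet socket missing; try: sudo brew services start socket_vmnet"
fi
```

Running this before the suite would distinguish an environment problem (daemon not running on the CI agent) from a genuine driver regression.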

TestStartStop/group/old-k8s-version/serial/FirstStart (11.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-128000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-128000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (11.45860675s)

-- stdout --
	* [old-k8s-version-128000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-128000 in cluster old-k8s-version-128000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-128000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:03:21.045267    4479 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:21.045395    4479 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:21.045398    4479 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:21.045401    4479 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:21.045509    4479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:21.046555    4479 out.go:303] Setting JSON to false
	I0912 15:03:21.061935    4479 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1975,"bootTime":1694554226,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:03:21.062023    4479 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:03:21.067500    4479 out.go:177] * [old-k8s-version-128000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:03:21.075377    4479 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:03:21.075406    4479 notify.go:220] Checking for updates...
	I0912 15:03:21.080817    4479 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:03:21.083367    4479 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:03:21.086439    4479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:03:21.089458    4479 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:03:21.092391    4479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:03:21.095783    4479 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:21.095838    4479 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:03:21.100410    4479 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:03:21.107423    4479 start.go:298] selected driver: qemu2
	I0912 15:03:21.107433    4479 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:03:21.107438    4479 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:03:21.109374    4479 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:03:21.112405    4479 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:03:21.115552    4479 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:03:21.115580    4479 cni.go:84] Creating CNI manager for ""
	I0912 15:03:21.115589    4479 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 15:03:21.115593    4479 start_flags.go:321] config:
	{Name:old-k8s-version-128000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-128000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSoc
k: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:03:21.119683    4479 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:21.126263    4479 out.go:177] * Starting control plane node old-k8s-version-128000 in cluster old-k8s-version-128000
	I0912 15:03:21.130392    4479 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0912 15:03:21.130411    4479 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0912 15:03:21.130421    4479 cache.go:57] Caching tarball of preloaded images
	I0912 15:03:21.130482    4479 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:03:21.130488    4479 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0912 15:03:21.130566    4479 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/old-k8s-version-128000/config.json ...
	I0912 15:03:21.130579    4479 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/old-k8s-version-128000/config.json: {Name:mk0ab5e50c7ddbfe707747d5eccf270b4d09305c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:03:21.130790    4479 start.go:365] acquiring machines lock for old-k8s-version-128000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:21.130825    4479 start.go:369] acquired machines lock for "old-k8s-version-128000" in 24.917µs
	I0912 15:03:21.130836    4479 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-128000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-128000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:03:21.130884    4479 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:03:21.135383    4479 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:03:21.151505    4479 start.go:159] libmachine.API.Create for "old-k8s-version-128000" (driver="qemu2")
	I0912 15:03:21.151526    4479 client.go:168] LocalClient.Create starting
	I0912 15:03:21.151583    4479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:03:21.151609    4479 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:21.151623    4479 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:21.151662    4479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:03:21.151680    4479 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:21.151691    4479 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:21.152046    4479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:03:21.268352    4479 main.go:141] libmachine: Creating SSH key...
	I0912 15:03:21.330785    4479 main.go:141] libmachine: Creating Disk image...
	I0912 15:03:21.330790    4479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:03:21.330928    4479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2
	I0912 15:03:21.339378    4479 main.go:141] libmachine: STDOUT: 
	I0912 15:03:21.339404    4479 main.go:141] libmachine: STDERR: 
	I0912 15:03:21.339459    4479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2 +20000M
	I0912 15:03:21.346844    4479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:03:21.346855    4479 main.go:141] libmachine: STDERR: 
	I0912 15:03:21.346877    4479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2
	I0912 15:03:21.346883    4479 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:03:21.346917    4479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:ae:0b:67:3d:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2
	I0912 15:03:21.348462    4479 main.go:141] libmachine: STDOUT: 
	I0912 15:03:21.348477    4479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:21.348493    4479 client.go:171] LocalClient.Create took 196.966291ms
	I0912 15:03:23.350667    4479 start.go:128] duration metric: createHost completed in 2.219803875s
	I0912 15:03:23.350730    4479 start.go:83] releasing machines lock for "old-k8s-version-128000", held for 2.219939959s
	W0912 15:03:23.350782    4479 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:23.359077    4479 out.go:177] * Deleting "old-k8s-version-128000" in qemu2 ...
	W0912 15:03:23.379414    4479 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:23.379442    4479 start.go:703] Will try again in 5 seconds ...
	I0912 15:03:28.381462    4479 start.go:365] acquiring machines lock for old-k8s-version-128000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:30.152404    4479 start.go:369] acquired machines lock for "old-k8s-version-128000" in 1.770942042s
	I0912 15:03:30.152540    4479 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-128000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-128000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:03:30.152755    4479 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:03:30.162372    4479 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:03:30.209939    4479 start.go:159] libmachine.API.Create for "old-k8s-version-128000" (driver="qemu2")
	I0912 15:03:30.209986    4479 client.go:168] LocalClient.Create starting
	I0912 15:03:30.210114    4479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:03:30.210170    4479 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:30.210196    4479 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:30.210253    4479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:03:30.210298    4479 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:30.210312    4479 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:30.210786    4479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:03:30.338442    4479 main.go:141] libmachine: Creating SSH key...
	I0912 15:03:30.419793    4479 main.go:141] libmachine: Creating Disk image...
	I0912 15:03:30.419801    4479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:03:30.419955    4479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2
	I0912 15:03:30.429004    4479 main.go:141] libmachine: STDOUT: 
	I0912 15:03:30.429018    4479 main.go:141] libmachine: STDERR: 
	I0912 15:03:30.429065    4479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2 +20000M
	I0912 15:03:30.436296    4479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:03:30.436309    4479 main.go:141] libmachine: STDERR: 
	I0912 15:03:30.436324    4479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2
	I0912 15:03:30.436329    4479 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:03:30.436371    4479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:a7:bf:48:c3:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2
	I0912 15:03:30.437837    4479 main.go:141] libmachine: STDOUT: 
	I0912 15:03:30.437853    4479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:30.437866    4479 client.go:171] LocalClient.Create took 227.881375ms
	I0912 15:03:32.438024    4479 start.go:128] duration metric: createHost completed in 2.285316s
	I0912 15:03:32.438103    4479 start.go:83] releasing machines lock for "old-k8s-version-128000", held for 2.285714083s
	W0912 15:03:32.438548    4479 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-128000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-128000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:32.443249    4479 out.go:177] 
	W0912 15:03:32.449362    4479 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:03:32.449392    4479 out.go:239] * 
	* 
	W0912 15:03:32.451967    4479 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:03:32.462211    4479 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-128000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000: exit status 7 (67.270208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (11.53s)
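Both create attempts above fail at the same step: QEMU is launched through `socket_vmnet_client`, which cannot reach the `/var/run/socket_vmnet` socket. A minimal pre-flight check is sketched below; the socket path is taken from the `SocketVMnetPath` in the config dump above, while the suggestion of how to start the daemon is an assumption about a typical install, not part of this log:

```shell
# Pre-flight check for the qemu2 driver's vmnet socket.
# /var/run/socket_vmnet is the SocketVMnetPath from the failing config above.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  echo "socket_vmnet socket present: $SOCK"
else
  # With the daemon down, minikube start --driver=qemu2 fails with
  # "Connection refused", exactly as in the log above.
  echo "socket_vmnet socket missing: $SOCK" >&2
fi
```

When the socket is missing, starting the socket_vmnet daemon (e.g. its launchd service, if installed that way) before rerunning the suite would plausibly clear this whole family of GUEST_PROVISION failures.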

TestStoppedBinaryUpgrade/Upgrade (3.05s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1182113001.exe start -p stopped-upgrade-333000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1182113001.exe start -p stopped-upgrade-333000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1182113001.exe: permission denied (7.78825ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1182113001.exe start -p stopped-upgrade-333000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1182113001.exe start -p stopped-upgrade-333000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1182113001.exe: permission denied (7.185125ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1182113001.exe start -p stopped-upgrade-333000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1182113001.exe start -p stopped-upgrade-333000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1182113001.exe: permission denied (7.197625ms)
version_upgrade_test.go:202: legacy v1.6.2 start failed: fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1182113001.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (3.05s)
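All three retries above die before the legacy binary ever runs: `fork/exec` returns "permission denied", which indicates the downloaded v1.6.2 binary is missing its execute bit. The sketch below reproduces the symptom and the likely fix with a hypothetical stand-in file (the real temp path from the log is left untouched):

```shell
# Simulate the failing binary: a freshly written file without the execute bit.
BIN=$(mktemp)                                  # stand-in for minikube-v1.6.2.*.exe
printf '#!/bin/sh\necho started\n' > "$BIN"
"$BIN" 2>/dev/null || echo "permission denied, as in the log"
chmod +x "$BIN"   # the missing step: restore the execute bit after download
"$BIN"            # now the shebang script actually runs and prints "started"
rm -f "$BIN"
```

This points at the test's download helper rather than minikube itself: the fetched legacy release needs `chmod +x` before `version_upgrade_test.go` execs it.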

TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-333000
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-333000: exit status 85 (112.751625ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-786000 sudo                                | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | systemctl status kubelet --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo                                | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo                                | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo cat                            | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo cat                            | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo                                | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo                                | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo cat                            | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo docker                         | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo                                | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo                                | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo cat                            | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo cat                            | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo                                | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo                                | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo                                | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo cat                            | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo cat                            | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo                                | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo                                | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo                                | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo find                           | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p calico-786000 sudo crio                           | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p calico-786000                                     | calico-786000          | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT | 12 Sep 23 15:03 PDT |
	| start   | -p false-786000 --memory=3072                        | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --wait-timeout=15m --cni=false                       |                        |         |         |                     |                     |
	|         | --driver=qemu2                                       |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo cat                             | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo cat                             | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /etc/hosts                                           |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo cat                             | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /etc/resolv.conf                                     |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo crictl                          | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | pods                                                 |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo crictl ps                       | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | --all                                                |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo find                            | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo ip a s                          | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	| ssh     | -p false-786000 sudo ip r s                          | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	| ssh     | -p false-786000 sudo                                 | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | iptables-save                                        |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo iptables                        | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | -t nat -L -n -v                                      |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo systemctl                       | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | status kubelet --all --full                          |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo systemctl                       | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | cat kubelet --no-pager                               |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo                                 | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo cat                             | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo cat                             | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo systemctl                       | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | status docker --all --full                           |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo systemctl                       | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | cat docker --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo cat                             | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo docker                          | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo systemctl                       | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | status cri-docker --all --full                       |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo systemctl                       | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | cat cri-docker --no-pager                            |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo cat                             | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo cat                             | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo                                 | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo systemctl                       | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | status containerd --all --full                       |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo systemctl                       | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | cat containerd --no-pager                            |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo cat                             | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo cat                             | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo                                 | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo systemctl                       | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | status crio --all --full                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo systemctl                       | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | cat crio --no-pager                                  |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo find                            | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p false-786000 sudo crio                            | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p false-786000                                      | false-786000           | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT | 12 Sep 23 15:03 PDT |
	| start   | -p old-k8s-version-128000                            | old-k8s-version-128000 | jenkins | v1.31.2 | 12 Sep 23 15:03 PDT |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --kvm-network=default                                |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                        |         |         |                     |                     |
	|         | --keep-context=false                                 |                        |         |         |                     |                     |
	|         | --driver=qemu2                                       |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 15:03:21
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 15:03:21.045267    4479 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:21.045395    4479 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:21.045398    4479 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:21.045401    4479 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:21.045509    4479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:21.046555    4479 out.go:303] Setting JSON to false
	I0912 15:03:21.061935    4479 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1975,"bootTime":1694554226,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:03:21.062023    4479 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:03:21.067500    4479 out.go:177] * [old-k8s-version-128000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:03:21.075377    4479 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:03:21.075406    4479 notify.go:220] Checking for updates...
	I0912 15:03:21.080817    4479 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:03:21.083367    4479 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:03:21.086439    4479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:03:21.089458    4479 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:03:21.092391    4479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:03:21.095783    4479 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:21.095838    4479 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:03:21.100410    4479 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:03:21.107423    4479 start.go:298] selected driver: qemu2
	I0912 15:03:21.107433    4479 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:03:21.107438    4479 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:03:21.109374    4479 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:03:21.112405    4479 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:03:21.115552    4479 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:03:21.115580    4479 cni.go:84] Creating CNI manager for ""
	I0912 15:03:21.115589    4479 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 15:03:21.115593    4479 start_flags.go:321] config:
	{Name:old-k8s-version-128000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-128000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSoc
k: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:03:21.119683    4479 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:21.126263    4479 out.go:177] * Starting control plane node old-k8s-version-128000 in cluster old-k8s-version-128000
	I0912 15:03:21.130392    4479 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0912 15:03:21.130411    4479 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0912 15:03:21.130421    4479 cache.go:57] Caching tarball of preloaded images
	I0912 15:03:21.130482    4479 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:03:21.130488    4479 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0912 15:03:21.130566    4479 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/old-k8s-version-128000/config.json ...
	I0912 15:03:21.130579    4479 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/old-k8s-version-128000/config.json: {Name:mk0ab5e50c7ddbfe707747d5eccf270b4d09305c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:03:21.130790    4479 start.go:365] acquiring machines lock for old-k8s-version-128000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:21.130825    4479 start.go:369] acquired machines lock for "old-k8s-version-128000" in 24.917µs
	I0912 15:03:21.130836    4479 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-128000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-128000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:03:21.130884    4479 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:03:21.135383    4479 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:03:21.151505    4479 start.go:159] libmachine.API.Create for "old-k8s-version-128000" (driver="qemu2")
	I0912 15:03:21.151526    4479 client.go:168] LocalClient.Create starting
	I0912 15:03:21.151583    4479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:03:21.151609    4479 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:21.151623    4479 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:21.151662    4479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:03:21.151680    4479 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:21.151691    4479 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:21.152046    4479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:03:21.268352    4479 main.go:141] libmachine: Creating SSH key...
	I0912 15:03:21.330785    4479 main.go:141] libmachine: Creating Disk image...
	I0912 15:03:21.330790    4479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:03:21.330928    4479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2
	I0912 15:03:21.339378    4479 main.go:141] libmachine: STDOUT: 
	I0912 15:03:21.339404    4479 main.go:141] libmachine: STDERR: 
	I0912 15:03:21.339459    4479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2 +20000M
	I0912 15:03:21.346844    4479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:03:21.346855    4479 main.go:141] libmachine: STDERR: 
	I0912 15:03:21.346877    4479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2
	I0912 15:03:21.346883    4479 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:03:21.346917    4479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:ae:0b:67:3d:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2
	I0912 15:03:21.348462    4479 main.go:141] libmachine: STDOUT: 
	I0912 15:03:21.348477    4479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:21.348493    4479 client.go:171] LocalClient.Create took 196.966291ms
	I0912 15:03:23.350667    4479 start.go:128] duration metric: createHost completed in 2.219803875s
	I0912 15:03:23.350730    4479 start.go:83] releasing machines lock for "old-k8s-version-128000", held for 2.219939959s
	W0912 15:03:23.350782    4479 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:23.359077    4479 out.go:177] * Deleting "old-k8s-version-128000" in qemu2 ...
	W0912 15:03:23.379414    4479 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:23.379442    4479 start.go:703] Will try again in 5 seconds ...
	
	* 
	* Profile "stopped-upgrade-333000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-333000"

-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-981000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-981000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.821073083s)

-- stdout --
	* [no-preload-981000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-981000 in cluster no-preload-981000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-981000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:03:27.852297    4508 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:27.852432    4508 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:27.852435    4508 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:27.852438    4508 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:27.852573    4508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:27.853553    4508 out.go:303] Setting JSON to false
	I0912 15:03:27.868800    4508 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1981,"bootTime":1694554226,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:03:27.868878    4508 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:03:27.874017    4508 out.go:177] * [no-preload-981000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:03:27.880999    4508 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:03:27.881077    4508 notify.go:220] Checking for updates...
	I0912 15:03:27.885042    4508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:03:27.888004    4508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:03:27.891984    4508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:03:27.896038    4508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:03:27.898991    4508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:03:27.902279    4508 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:27.902346    4508 config.go:182] Loaded profile config "old-k8s-version-128000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0912 15:03:27.902384    4508 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:03:27.906915    4508 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:03:27.913924    4508 start.go:298] selected driver: qemu2
	I0912 15:03:27.913928    4508 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:03:27.913935    4508 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:03:27.915967    4508 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:03:27.918960    4508 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:03:27.922048    4508 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:03:27.922066    4508 cni.go:84] Creating CNI manager for ""
	I0912 15:03:27.922075    4508 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:03:27.922079    4508 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:03:27.922085    4508 start_flags.go:321] config:
	{Name:no-preload-981000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-981000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSH
AgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:03:27.926342    4508 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:27.932981    4508 out.go:177] * Starting control plane node no-preload-981000 in cluster no-preload-981000
	I0912 15:03:27.936803    4508 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:03:27.936878    4508 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/no-preload-981000/config.json ...
	I0912 15:03:27.936894    4508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/no-preload-981000/config.json: {Name:mk53b5aea800e294fb17ddf6142dcbddc7e234fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:03:27.936902    4508 cache.go:107] acquiring lock: {Name:mkc1a77caa83518e0594a0d738906ba672cfffcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:27.936932    4508 cache.go:107] acquiring lock: {Name:mk6a55e4453e05fdd3ca36cd3345c91117efcd82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:27.936969    4508 cache.go:115] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 15:03:27.936977    4508 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 79.542µs
	I0912 15:03:27.936984    4508 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 15:03:27.936995    4508 cache.go:107] acquiring lock: {Name:mk4f461ff3e5eb21e5548ea92d8b919b38cbdf52 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:27.936995    4508 cache.go:107] acquiring lock: {Name:mk323faa6838ac9c63391898cad9ba82e6ba92cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:27.936934    4508 cache.go:107] acquiring lock: {Name:mkb3565e71ebc4576729e3126454ede880431e4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:27.937075    4508 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0912 15:03:27.937132    4508 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0912 15:03:27.937154    4508 start.go:365] acquiring machines lock for no-preload-981000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:27.937142    4508 cache.go:107] acquiring lock: {Name:mkfa9ffd9ff51d0e4ac9bce0108963c70b5bd740 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:27.937216    4508 cache.go:107] acquiring lock: {Name:mkcae99cecb16b717f79a14b58b3a8a64d85d6b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:27.937228    4508 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0912 15:03:27.937256    4508 cache.go:107] acquiring lock: {Name:mk3a27396c842f8aca30fbe39877b6b4da29e63d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:27.937264    4508 start.go:369] acquired machines lock for "no-preload-981000" in 96.459µs
	I0912 15:03:27.937270    4508 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0912 15:03:27.937337    4508 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0912 15:03:27.937299    4508 start.go:93] Provisioning new machine with config: &{Name:no-preload-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-981000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:03:27.937378    4508 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:03:27.945898    4508 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:03:27.937435    4508 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0912 15:03:27.937444    4508 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0912 15:03:27.949315    4508 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0912 15:03:27.950048    4508 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0912 15:03:27.950166    4508 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0912 15:03:27.950181    4508 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0912 15:03:27.950201    4508 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0912 15:03:27.952930    4508 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0912 15:03:27.952939    4508 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0912 15:03:27.962946    4508 start.go:159] libmachine.API.Create for "no-preload-981000" (driver="qemu2")
	I0912 15:03:27.962975    4508 client.go:168] LocalClient.Create starting
	I0912 15:03:27.963046    4508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:03:27.963076    4508 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:27.963089    4508 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:27.963142    4508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:03:27.963164    4508 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:27.963173    4508 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:27.963547    4508 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:03:28.087243    4508 main.go:141] libmachine: Creating SSH key...
	I0912 15:03:28.134356    4508 main.go:141] libmachine: Creating Disk image...
	I0912 15:03:28.134366    4508 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:03:28.134517    4508 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/disk.qcow2
	I0912 15:03:28.142807    4508 main.go:141] libmachine: STDOUT: 
	I0912 15:03:28.142825    4508 main.go:141] libmachine: STDERR: 
	I0912 15:03:28.142896    4508 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/disk.qcow2 +20000M
	I0912 15:03:28.150194    4508 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:03:28.150208    4508 main.go:141] libmachine: STDERR: 
	I0912 15:03:28.150232    4508 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/disk.qcow2
	I0912 15:03:28.150241    4508 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:03:28.150281    4508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:8b:88:20:0b:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/disk.qcow2
	I0912 15:03:28.151909    4508 main.go:141] libmachine: STDOUT: 
	I0912 15:03:28.151928    4508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:28.151948    4508 client.go:171] LocalClient.Create took 188.971208ms
	I0912 15:03:28.542379    4508 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1
	I0912 15:03:28.600190    4508 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0912 15:03:28.769405    4508 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1
	I0912 15:03:28.976707    4508 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1
	I0912 15:03:29.202120    4508 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0912 15:03:29.330680    4508 cache.go:157] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0912 15:03:29.330701    4508 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.393733208s
	I0912 15:03:29.330715    4508 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0912 15:03:29.385464    4508 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0912 15:03:29.606511    4508 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0912 15:03:30.152217    4508 start.go:128] duration metric: createHost completed in 2.214847167s
	I0912 15:03:30.152287    4508 start.go:83] releasing machines lock for "no-preload-981000", held for 2.215056375s
	W0912 15:03:30.152337    4508 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:30.169850    4508 out.go:177] * Deleting "no-preload-981000" in qemu2 ...
	W0912 15:03:30.185230    4508 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:30.185262    4508 start.go:703] Will try again in 5 seconds ...
	I0912 15:03:31.193778    4508 cache.go:157] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0912 15:03:31.193829    4508 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 3.256731709s
	I0912 15:03:31.193858    4508 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0912 15:03:31.548004    4508 cache.go:157] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 exists
	I0912 15:03:31.548056    4508 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.1" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1" took 3.611042s
	I0912 15:03:31.548146    4508 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.1 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 succeeded
	I0912 15:03:32.156358    4508 cache.go:157] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 exists
	I0912 15:03:32.156428    4508 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.1" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1" took 4.219642333s
	I0912 15:03:32.156457    4508 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.1 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 succeeded
	I0912 15:03:33.296537    4508 cache.go:157] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 exists
	I0912 15:03:33.296585    4508 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.1" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1" took 5.359841s
	I0912 15:03:33.296611    4508 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.1 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 succeeded
	I0912 15:03:33.921883    4508 cache.go:157] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 exists
	I0912 15:03:33.921936    4508 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.1" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1" took 5.985143875s
	I0912 15:03:33.921975    4508 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.1 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 succeeded
	I0912 15:03:35.185287    4508 start.go:365] acquiring machines lock for no-preload-981000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:35.185697    4508 start.go:369] acquired machines lock for "no-preload-981000" in 311.334µs
	I0912 15:03:35.185839    4508 start.go:93] Provisioning new machine with config: &{Name:no-preload-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-981000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:03:35.186100    4508 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:03:35.194840    4508 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:03:35.241657    4508 start.go:159] libmachine.API.Create for "no-preload-981000" (driver="qemu2")
	I0912 15:03:35.241724    4508 client.go:168] LocalClient.Create starting
	I0912 15:03:35.241842    4508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:03:35.241891    4508 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:35.241913    4508 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:35.241993    4508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:03:35.242025    4508 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:35.242046    4508 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:35.242596    4508 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:03:35.367348    4508 main.go:141] libmachine: Creating SSH key...
	I0912 15:03:35.582914    4508 main.go:141] libmachine: Creating Disk image...
	I0912 15:03:35.582925    4508 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:03:35.583076    4508 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/disk.qcow2
	I0912 15:03:35.592194    4508 main.go:141] libmachine: STDOUT: 
	I0912 15:03:35.592209    4508 main.go:141] libmachine: STDERR: 
	I0912 15:03:35.592258    4508 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/disk.qcow2 +20000M
	I0912 15:03:35.599565    4508 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:03:35.599580    4508 main.go:141] libmachine: STDERR: 
	I0912 15:03:35.599590    4508 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/disk.qcow2
	I0912 15:03:35.599598    4508 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:03:35.599643    4508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:5b:d8:19:ff:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/disk.qcow2
	I0912 15:03:35.601201    4508 main.go:141] libmachine: STDOUT: 
	I0912 15:03:35.601213    4508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:35.601228    4508 client.go:171] LocalClient.Create took 359.509375ms
	I0912 15:03:37.601542    4508 start.go:128] duration metric: createHost completed in 2.415478s
	I0912 15:03:37.601623    4508 start.go:83] releasing machines lock for "no-preload-981000", held for 2.415973083s
	W0912 15:03:37.601911    4508 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-981000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-981000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:37.611378    4508 out.go:177] 
	W0912 15:03:37.614424    4508 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:03:37.614449    4508 out.go:239] * 
	* 
	W0912 15:03:37.617458    4508 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:03:37.627397    4508 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-981000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (63.910041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.89s)
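Although minikube exits with the generic GUEST_PROVISION error, the root cause visible throughout the log above is the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which suggests the socket_vmnet daemon was not running on the build agent. As a minimal sketch (a hypothetical helper, not part of minikube or this test suite), a captured start log could be triaged for this pattern like so:

```shell
#!/bin/sh
# classify_start_failure: read a captured minikube start log on stdin and
# print a one-word diagnosis distinguishing the socket_vmnet connection
# failure seen in this run from other GUEST_PROVISION exits.
classify_start_failure() {
  log=$(cat)   # buffer stdin so we can match against it more than once
  case "$log" in
    *'Failed to connect to "/var/run/socket_vmnet": Connection refused'*)
      echo "socket_vmnet-down" ;;
    *GUEST_PROVISION*)
      echo "guest-provision-other" ;;
    *)
      echo "other" ;;
  esac
}

# Example: feed the exact error line from the log above.
printf '%s\n' 'Failed to connect to "/var/run/socket_vmnet": Connection refused' \
  | classify_start_failure
```

If the diagnosis is `socket_vmnet-down`, restarting the daemon on the agent (for a Homebrew install, typically `sudo brew services start socket_vmnet`, per the minikube qemu2 driver docs) would likely need to happen before rerunning these tests.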

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-128000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-128000 create -f testdata/busybox.yaml: exit status 1 (30.528583ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-128000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000: exit status 7 (28.578666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-128000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000: exit status 7 (27.90125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-128000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-128000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-128000 describe deploy/metrics-server -n kube-system: exit status 1 (26.602334ms)

** stderr ** 
	error: context "old-k8s-version-128000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-128000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000: exit status 7 (28.555958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-128000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-128000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.200767834s)

-- stdout --
	* [old-k8s-version-128000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-128000 in cluster old-k8s-version-128000
	* Restarting existing qemu2 VM for "old-k8s-version-128000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-128000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:03:32.913688    4638 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:32.913834    4638 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:32.913837    4638 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:32.913839    4638 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:32.913981    4638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:32.914927    4638 out.go:303] Setting JSON to false
	I0912 15:03:32.930403    4638 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1986,"bootTime":1694554226,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:03:32.930475    4638 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:03:32.934384    4638 out.go:177] * [old-k8s-version-128000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:03:32.944361    4638 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:03:32.941424    4638 notify.go:220] Checking for updates...
	I0912 15:03:32.952265    4638 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:03:32.959184    4638 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:03:32.966305    4638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:03:32.973282    4638 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:03:32.981272    4638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:03:32.985594    4638 config.go:182] Loaded profile config "old-k8s-version-128000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0912 15:03:32.990350    4638 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0912 15:03:32.994320    4638 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:03:32.998287    4638 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:03:33.006293    4638 start.go:298] selected driver: qemu2
	I0912 15:03:33.006298    4638 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-128000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-128000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:03:33.006363    4638 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:03:33.008696    4638 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:03:33.008724    4638 cni.go:84] Creating CNI manager for ""
	I0912 15:03:33.008733    4638 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 15:03:33.008747    4638 start_flags.go:321] config:
	{Name:old-k8s-version-128000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-128000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:03:33.013541    4638 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:33.021276    4638 out.go:177] * Starting control plane node old-k8s-version-128000 in cluster old-k8s-version-128000
	I0912 15:03:33.025316    4638 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0912 15:03:33.025347    4638 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0912 15:03:33.025357    4638 cache.go:57] Caching tarball of preloaded images
	I0912 15:03:33.025444    4638 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:03:33.025450    4638 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0912 15:03:33.025546    4638 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/old-k8s-version-128000/config.json ...
	I0912 15:03:33.025810    4638 start.go:365] acquiring machines lock for old-k8s-version-128000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:33.025843    4638 start.go:369] acquired machines lock for "old-k8s-version-128000" in 26.791µs
	I0912 15:03:33.025853    4638 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:03:33.025860    4638 fix.go:54] fixHost starting: 
	I0912 15:03:33.025990    4638 fix.go:102] recreateIfNeeded on old-k8s-version-128000: state=Stopped err=<nil>
	W0912 15:03:33.026000    4638 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 15:03:33.030298    4638 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-128000" ...
	I0912 15:03:33.038288    4638 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:a7:bf:48:c3:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2
	I0912 15:03:33.040176    4638 main.go:141] libmachine: STDOUT: 
	I0912 15:03:33.040194    4638 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:33.040226    4638 fix.go:56] fixHost completed within 14.36675ms
	I0912 15:03:33.040231    4638 start.go:83] releasing machines lock for "old-k8s-version-128000", held for 14.383ms
	W0912 15:03:33.040238    4638 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:03:33.040289    4638 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:33.040294    4638 start.go:703] Will try again in 5 seconds ...
	I0912 15:03:38.042163    4638 start.go:365] acquiring machines lock for old-k8s-version-128000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:38.042207    4638 start.go:369] acquired machines lock for "old-k8s-version-128000" in 32.417µs
	I0912 15:03:38.042217    4638 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:03:38.042222    4638 fix.go:54] fixHost starting: 
	I0912 15:03:38.042335    4638 fix.go:102] recreateIfNeeded on old-k8s-version-128000: state=Stopped err=<nil>
	W0912 15:03:38.042340    4638 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 15:03:38.046808    4638 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-128000" ...
	I0912 15:03:38.054809    4638 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:a7:bf:48:c3:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/old-k8s-version-128000/disk.qcow2
	I0912 15:03:38.056707    4638 main.go:141] libmachine: STDOUT: 
	I0912 15:03:38.056721    4638 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:38.056740    4638 fix.go:56] fixHost completed within 14.519125ms
	I0912 15:03:38.056746    4638 start.go:83] releasing machines lock for "old-k8s-version-128000", held for 14.535667ms
	W0912 15:03:38.056793    4638 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-128000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-128000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:38.060828    4638 out.go:177] 
	W0912 15:03:38.067865    4638 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:03:38.067870    4638 out.go:239] * 
	* 
	W0912 15:03:38.068393    4638 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:03:38.082803    4638 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-128000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000: exit status 7 (33.093959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)
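Every restart attempt in the failures above dies with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which suggests the socket_vmnet daemon was not running on the build agent. A minimal diagnostic sketch one could run on the agent before the test job (the socket path is taken from the log; the `check_socket` helper is hypothetical, not part of minikube):

```shell
#!/bin/sh
# check_socket: hypothetical helper that reports whether a unix
# domain socket exists at the given path.
check_socket() {
  if [ -S "$1" ]; then
    echo "present"
  else
    echo "missing"
  fi
}

# Path reported in the failures above.
check_socket /var/run/socket_vmnet
```

If the socket is missing, restarting the socket_vmnet service on the agent (however it is managed there, e.g. a launchd job) would be the first thing to try before rerunning the suite.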

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-981000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-981000 create -f testdata/busybox.yaml: exit status 1 (29.463667ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-981000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (28.47575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (28.534625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-981000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-981000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-981000 describe deploy/metrics-server -n kube-system: exit status 1 (26.194375ms)

** stderr ** 
	error: context "no-preload-981000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-981000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (28.967333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-981000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-981000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.195768666s)

-- stdout --
	* [no-preload-981000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-981000 in cluster no-preload-981000
	* Restarting existing qemu2 VM for "no-preload-981000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-981000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:03:38.122103    4669 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:38.122231    4669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:38.122233    4669 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:38.122236    4669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:38.122380    4669 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:38.123509    4669 out.go:303] Setting JSON to false
	I0912 15:03:38.139754    4669 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1992,"bootTime":1694554226,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:03:38.139832    4669 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:03:38.143803    4669 out.go:177] * [no-preload-981000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:03:38.156786    4669 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:03:38.151087    4669 notify.go:220] Checking for updates...
	I0912 15:03:38.164793    4669 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:03:38.167864    4669 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:03:38.170802    4669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:03:38.173819    4669 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:03:38.176762    4669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:03:38.180140    4669 config.go:182] Loaded profile config "no-preload-981000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:38.180405    4669 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:03:38.184813    4669 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:03:38.191753    4669 start.go:298] selected driver: qemu2
	I0912 15:03:38.191761    4669 start.go:902] validating driver "qemu2" against &{Name:no-preload-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-981000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:03:38.191826    4669 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:03:38.193811    4669 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:03:38.193845    4669 cni.go:84] Creating CNI manager for ""
	I0912 15:03:38.193851    4669 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:03:38.193856    4669 start_flags.go:321] config:
	{Name:no-preload-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-981000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:03:38.198219    4669 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:38.202809    4669 out.go:177] * Starting control plane node no-preload-981000 in cluster no-preload-981000
	I0912 15:03:38.210855    4669 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:03:38.211011    4669 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/no-preload-981000/config.json ...
	I0912 15:03:38.211034    4669 cache.go:107] acquiring lock: {Name:mkc1a77caa83518e0594a0d738906ba672cfffcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:38.211102    4669 cache.go:107] acquiring lock: {Name:mk6a55e4453e05fdd3ca36cd3345c91117efcd82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:38.211168    4669 cache.go:115] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 exists
	I0912 15:03:38.211160    4669 cache.go:107] acquiring lock: {Name:mkfa9ffd9ff51d0e4ac9bce0108963c70b5bd740 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:38.211173    4669 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.1" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1" took 88.833µs
	I0912 15:03:38.211179    4669 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.1 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 succeeded
	I0912 15:03:38.211186    4669 cache.go:107] acquiring lock: {Name:mkcae99cecb16b717f79a14b58b3a8a64d85d6b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:38.211177    4669 cache.go:115] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 15:03:38.211204    4669 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 173.958µs
	I0912 15:03:38.211217    4669 cache.go:115] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 exists
	I0912 15:03:38.211217    4669 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 15:03:38.211222    4669 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.1" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1" took 37.25µs
	I0912 15:03:38.211100    4669 cache.go:107] acquiring lock: {Name:mkb3565e71ebc4576729e3126454ede880431e4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:38.211238    4669 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0912 15:03:38.211251    4669 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.1 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 succeeded
	I0912 15:03:38.211282    4669 cache.go:115] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 exists
	I0912 15:03:38.211287    4669 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.1" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1" took 203.166µs
	I0912 15:03:38.211290    4669 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.1 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 succeeded
	I0912 15:03:38.211287    4669 cache.go:107] acquiring lock: {Name:mk323faa6838ac9c63391898cad9ba82e6ba92cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:38.211325    4669 cache.go:107] acquiring lock: {Name:mk4f461ff3e5eb21e5548ea92d8b919b38cbdf52 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:38.211348    4669 cache.go:115] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 exists
	I0912 15:03:38.211355    4669 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.1" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1" took 92.208µs
	I0912 15:03:38.211360    4669 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.1 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 succeeded
	I0912 15:03:38.211359    4669 start.go:365] acquiring machines lock for no-preload-981000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:38.211379    4669 cache.go:115] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0912 15:03:38.211386    4669 start.go:369] acquired machines lock for "no-preload-981000" in 22.333µs
	I0912 15:03:38.211385    4669 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 126.792µs
	I0912 15:03:38.211392    4669 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0912 15:03:38.211396    4669 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:03:38.211401    4669 fix.go:54] fixHost starting: 
	I0912 15:03:38.211427    4669 cache.go:107] acquiring lock: {Name:mk3a27396c842f8aca30fbe39877b6b4da29e63d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:38.211477    4669 cache.go:115] /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0912 15:03:38.211485    4669 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 110.416µs
	I0912 15:03:38.211491    4669 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0912 15:03:38.211516    4669 fix.go:102] recreateIfNeeded on no-preload-981000: state=Stopped err=<nil>
	W0912 15:03:38.211524    4669 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 15:03:38.217792    4669 out.go:177] * Restarting existing qemu2 VM for "no-preload-981000" ...
	I0912 15:03:38.221822    4669 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:5b:d8:19:ff:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/disk.qcow2
	I0912 15:03:38.222450    4669 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0912 15:03:38.223683    4669 main.go:141] libmachine: STDOUT: 
	I0912 15:03:38.223702    4669 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:38.223731    4669 fix.go:56] fixHost completed within 12.328375ms
	I0912 15:03:38.223744    4669 start.go:83] releasing machines lock for "no-preload-981000", held for 12.355334ms
	W0912 15:03:38.223751    4669 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:03:38.223808    4669 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:38.223812    4669 start.go:703] Will try again in 5 seconds ...
	I0912 15:03:38.763043    4669 cache.go:162] opening:  /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0912 15:03:43.224643    4669 start.go:365] acquiring machines lock for no-preload-981000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:43.225099    4669 start.go:369] acquired machines lock for "no-preload-981000" in 377.917µs
	I0912 15:03:43.225241    4669 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:03:43.225264    4669 fix.go:54] fixHost starting: 
	I0912 15:03:43.225944    4669 fix.go:102] recreateIfNeeded on no-preload-981000: state=Stopped err=<nil>
	W0912 15:03:43.225971    4669 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 15:03:43.237660    4669 out.go:177] * Restarting existing qemu2 VM for "no-preload-981000" ...
	I0912 15:03:43.241636    4669 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:5b:d8:19:ff:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/no-preload-981000/disk.qcow2
	I0912 15:03:43.251669    4669 main.go:141] libmachine: STDOUT: 
	I0912 15:03:43.251725    4669 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:43.251808    4669 fix.go:56] fixHost completed within 26.545917ms
	I0912 15:03:43.251830    4669 start.go:83] releasing machines lock for "no-preload-981000", held for 26.707917ms
	W0912 15:03:43.252101    4669 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-981000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-981000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:43.259560    4669 out.go:177] 
	W0912 15:03:43.263556    4669 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:03:43.263600    4669 out.go:239] * 
	* 
	W0912 15:03:43.266454    4669 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:03:43.275462    4669 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-981000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (68.993959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
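Editor's note: this failure, like most in the report, reduces to `Failed to connect to "/var/run/socket_vmnet": Connection refused` from the qemu2 driver. A minimal pre-flight triage sketch, assuming a Homebrew-managed socket_vmnet install (the `brew services` hint is an assumption about how the daemon is run on this agent; the socket path comes from the log above):

```shell
#!/bin/sh
# Check whether the socket_vmnet daemon's unix socket exists before
# rerunning the suite; its absence explains the repeated qemu2 failures.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  echo "socket present: $SOCK"
else
  echo "socket missing: $SOCK; try: sudo brew services start socket_vmnet"
fi
```

If the socket exists but connections are still refused, the daemon may be running under a different path than the `SocketVMnetPath` recorded in the profile config above.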

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-128000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000: exit status 7 (35.047666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-128000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-128000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-128000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.943125ms)

** stderr ** 
	error: context "old-k8s-version-128000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-128000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000: exit status 7 (34.499625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-128000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-128000 "sudo crictl images -o json": exit status 89 (41.495334ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-128000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-128000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-128000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000: exit status 7 (30.236917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-128000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-128000 --alsologtostderr -v=1: exit status 89 (42.219292ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-128000"

-- /stdout --
** stderr ** 
	I0912 15:03:38.318095    4691 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:38.318431    4691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:38.318435    4691 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:38.318438    4691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:38.318597    4691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:38.318833    4691 out.go:303] Setting JSON to false
	I0912 15:03:38.318846    4691 mustload.go:65] Loading cluster: old-k8s-version-128000
	I0912 15:03:38.319037    4691 config.go:182] Loaded profile config "old-k8s-version-128000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0912 15:03:38.323818    4691 out.go:177] * The control plane node must be running for this command
	I0912 15:03:38.327925    4691 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-128000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-128000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000: exit status 7 (29.581417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-128000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000: exit status 7 (30.168625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (10.14s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-280000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-280000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (10.079780792s)

-- stdout --
	* [embed-certs-280000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-280000 in cluster embed-certs-280000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-280000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:03:38.785374    4723 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:38.785521    4723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:38.785524    4723 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:38.785527    4723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:38.785661    4723 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:38.786771    4723 out.go:303] Setting JSON to false
	I0912 15:03:38.802321    4723 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1992,"bootTime":1694554226,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:03:38.802402    4723 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:03:38.810497    4723 out.go:177] * [embed-certs-280000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:03:38.814520    4723 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:03:38.818387    4723 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:03:38.814594    4723 notify.go:220] Checking for updates...
	I0912 15:03:38.825447    4723 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:03:38.828425    4723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:03:38.835235    4723 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:03:38.839446    4723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:03:38.842882    4723 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:38.842956    4723 config.go:182] Loaded profile config "no-preload-981000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:38.842999    4723 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:03:38.847402    4723 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:03:38.854445    4723 start.go:298] selected driver: qemu2
	I0912 15:03:38.854451    4723 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:03:38.854457    4723 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:03:38.856623    4723 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:03:38.860436    4723 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:03:38.863481    4723 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:03:38.863499    4723 cni.go:84] Creating CNI manager for ""
	I0912 15:03:38.863506    4723 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:03:38.863510    4723 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:03:38.863516    4723 start_flags.go:321] config:
	{Name:embed-certs-280000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-280000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SS
HAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:03:38.867976    4723 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:38.869405    4723 out.go:177] * Starting control plane node embed-certs-280000 in cluster embed-certs-280000
	I0912 15:03:38.877462    4723 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:03:38.877485    4723 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:03:38.877506    4723 cache.go:57] Caching tarball of preloaded images
	I0912 15:03:38.877572    4723 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:03:38.877585    4723 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:03:38.877652    4723 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/embed-certs-280000/config.json ...
	I0912 15:03:38.877670    4723 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/embed-certs-280000/config.json: {Name:mk6946a6c0bee8451ed09650a6c3a18d96406119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:03:38.877882    4723 start.go:365] acquiring machines lock for embed-certs-280000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:38.877914    4723 start.go:369] acquired machines lock for "embed-certs-280000" in 25.417µs
	I0912 15:03:38.877926    4723 start.go:93] Provisioning new machine with config: &{Name:embed-certs-280000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-280000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:03:38.877960    4723 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:03:38.886538    4723 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:03:38.903153    4723 start.go:159] libmachine.API.Create for "embed-certs-280000" (driver="qemu2")
	I0912 15:03:38.903181    4723 client.go:168] LocalClient.Create starting
	I0912 15:03:38.903241    4723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:03:38.903266    4723 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:38.903276    4723 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:38.903319    4723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:03:38.903339    4723 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:38.903347    4723 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:38.903661    4723 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:03:39.019632    4723 main.go:141] libmachine: Creating SSH key...
	I0912 15:03:39.062531    4723 main.go:141] libmachine: Creating Disk image...
	I0912 15:03:39.062536    4723 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:03:39.062658    4723 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/disk.qcow2
	I0912 15:03:39.071471    4723 main.go:141] libmachine: STDOUT: 
	I0912 15:03:39.071483    4723 main.go:141] libmachine: STDERR: 
	I0912 15:03:39.071545    4723 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/disk.qcow2 +20000M
	I0912 15:03:39.078715    4723 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:03:39.078730    4723 main.go:141] libmachine: STDERR: 
	I0912 15:03:39.078744    4723 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/disk.qcow2
	I0912 15:03:39.078751    4723 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:03:39.078797    4723 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:df:94:d8:1f:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/disk.qcow2
	I0912 15:03:39.080332    4723 main.go:141] libmachine: STDOUT: 
	I0912 15:03:39.080349    4723 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:39.080369    4723 client.go:171] LocalClient.Create took 177.186875ms
	I0912 15:03:41.082521    4723 start.go:128] duration metric: createHost completed in 2.204591542s
	I0912 15:03:41.082604    4723 start.go:83] releasing machines lock for "embed-certs-280000", held for 2.20474875s
	W0912 15:03:41.082677    4723 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:41.090106    4723 out.go:177] * Deleting "embed-certs-280000" in qemu2 ...
	W0912 15:03:41.110168    4723 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:41.110240    4723 start.go:703] Will try again in 5 seconds ...
	I0912 15:03:46.112328    4723 start.go:365] acquiring machines lock for embed-certs-280000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:46.548061    4723 start.go:369] acquired machines lock for "embed-certs-280000" in 435.604667ms
	I0912 15:03:46.548170    4723 start.go:93] Provisioning new machine with config: &{Name:embed-certs-280000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-280000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:03:46.548469    4723 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:03:46.557157    4723 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:03:46.603426    4723 start.go:159] libmachine.API.Create for "embed-certs-280000" (driver="qemu2")
	I0912 15:03:46.603475    4723 client.go:168] LocalClient.Create starting
	I0912 15:03:46.603593    4723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:03:46.603651    4723 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:46.603670    4723 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:46.603756    4723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:03:46.603793    4723 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:46.603808    4723 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:46.604259    4723 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:03:46.730678    4723 main.go:141] libmachine: Creating SSH key...
	I0912 15:03:46.779465    4723 main.go:141] libmachine: Creating Disk image...
	I0912 15:03:46.779470    4723 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:03:46.779613    4723 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/disk.qcow2
	I0912 15:03:46.788041    4723 main.go:141] libmachine: STDOUT: 
	I0912 15:03:46.788056    4723 main.go:141] libmachine: STDERR: 
	I0912 15:03:46.788104    4723 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/disk.qcow2 +20000M
	I0912 15:03:46.795241    4723 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:03:46.795258    4723 main.go:141] libmachine: STDERR: 
	I0912 15:03:46.795290    4723 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/disk.qcow2
	I0912 15:03:46.795304    4723 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:03:46.795339    4723 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:57:d8:f8:9f:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/disk.qcow2
	I0912 15:03:46.796889    4723 main.go:141] libmachine: STDOUT: 
	I0912 15:03:46.796902    4723 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:46.796915    4723 client.go:171] LocalClient.Create took 193.434875ms
	I0912 15:03:48.797830    4723 start.go:128] duration metric: createHost completed in 2.249387333s
	I0912 15:03:48.797881    4723 start.go:83] releasing machines lock for "embed-certs-280000", held for 2.249835666s
	W0912 15:03:48.798222    4723 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-280000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-280000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:48.810326    4723 out.go:177] 
	W0912 15:03:48.813813    4723 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:03:48.813838    4723 out.go:239] * 
	* 
	W0912 15:03:48.815895    4723 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:03:48.825679    4723 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-280000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000: exit status 7 (62.927417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-280000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.14s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-981000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (31.978125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-981000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-981000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-981000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.119375ms)

** stderr ** 
	error: context "no-preload-981000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-981000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (28.789208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-981000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-981000 "sudo crictl images -o json": exit status 89 (43.451584ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-981000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-981000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-981000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (28.672541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-981000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-981000 --alsologtostderr -v=1: exit status 89 (39.893375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-981000"

-- /stdout --
** stderr ** 
	I0912 15:03:43.549150    4745 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:43.549359    4745 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:43.549362    4745 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:43.549364    4745 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:43.549492    4745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:43.549743    4745 out.go:303] Setting JSON to false
	I0912 15:03:43.549752    4745 mustload.go:65] Loading cluster: no-preload-981000
	I0912 15:03:43.549951    4745 config.go:182] Loaded profile config "no-preload-981000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:43.552939    4745 out.go:177] * The control plane node must be running for this command
	I0912 15:03:43.557100    4745 out.go:177]   To start a cluster, run: "minikube start -p no-preload-981000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-981000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (29.465917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (28.323208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-803000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-803000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.696733291s)

-- stdout --
	* [default-k8s-diff-port-803000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-803000 in cluster default-k8s-diff-port-803000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-803000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:03:44.240201    4780 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:44.240323    4780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:44.240326    4780 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:44.240328    4780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:44.240467    4780 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:44.241501    4780 out.go:303] Setting JSON to false
	I0912 15:03:44.256693    4780 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1998,"bootTime":1694554226,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:03:44.256771    4780 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:03:44.260996    4780 out.go:177] * [default-k8s-diff-port-803000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:03:44.263923    4780 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:03:44.267957    4780 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:03:44.263968    4780 notify.go:220] Checking for updates...
	I0912 15:03:44.271995    4780 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:03:44.274949    4780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:03:44.277893    4780 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:03:44.280897    4780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:03:44.284216    4780 config.go:182] Loaded profile config "embed-certs-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:44.284274    4780 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:44.284312    4780 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:03:44.287836    4780 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:03:44.294860    4780 start.go:298] selected driver: qemu2
	I0912 15:03:44.294863    4780 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:03:44.294869    4780 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:03:44.296744    4780 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 15:03:44.299887    4780 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:03:44.303016    4780 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:03:44.303053    4780 cni.go:84] Creating CNI manager for ""
	I0912 15:03:44.303060    4780 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:03:44.303064    4780 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:03:44.303071    4780 start_flags.go:321] config:
	{Name:default-k8s-diff-port-803000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-803000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stat
icIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:03:44.307298    4780 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:44.313891    4780 out.go:177] * Starting control plane node default-k8s-diff-port-803000 in cluster default-k8s-diff-port-803000
	I0912 15:03:44.317894    4780 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:03:44.317916    4780 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:03:44.317926    4780 cache.go:57] Caching tarball of preloaded images
	I0912 15:03:44.318021    4780 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:03:44.318027    4780 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:03:44.318091    4780 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/default-k8s-diff-port-803000/config.json ...
	I0912 15:03:44.318104    4780 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/default-k8s-diff-port-803000/config.json: {Name:mk12850542cba3ff5db722b6cbea7c88e8917bc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:03:44.318318    4780 start.go:365] acquiring machines lock for default-k8s-diff-port-803000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:44.318354    4780 start.go:369] acquired machines lock for "default-k8s-diff-port-803000" in 26.083µs
	I0912 15:03:44.318368    4780 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-803000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-803000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:03:44.318410    4780 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:03:44.322904    4780 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:03:44.338915    4780 start.go:159] libmachine.API.Create for "default-k8s-diff-port-803000" (driver="qemu2")
	I0912 15:03:44.338932    4780 client.go:168] LocalClient.Create starting
	I0912 15:03:44.338994    4780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:03:44.339019    4780 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:44.339031    4780 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:44.339073    4780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:03:44.339093    4780 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:44.339103    4780 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:44.339415    4780 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:03:44.456455    4780 main.go:141] libmachine: Creating SSH key...
	I0912 15:03:44.528089    4780 main.go:141] libmachine: Creating Disk image...
	I0912 15:03:44.528095    4780 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:03:44.528228    4780 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/disk.qcow2
	I0912 15:03:44.536777    4780 main.go:141] libmachine: STDOUT: 
	I0912 15:03:44.536791    4780 main.go:141] libmachine: STDERR: 
	I0912 15:03:44.536847    4780 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/disk.qcow2 +20000M
	I0912 15:03:44.544019    4780 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:03:44.544032    4780 main.go:141] libmachine: STDERR: 
	I0912 15:03:44.544047    4780 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/disk.qcow2
	I0912 15:03:44.544054    4780 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:03:44.544085    4780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:61:1a:84:f2:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/disk.qcow2
	I0912 15:03:44.545637    4780 main.go:141] libmachine: STDOUT: 
	I0912 15:03:44.545664    4780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:44.545680    4780 client.go:171] LocalClient.Create took 206.748375ms
	I0912 15:03:46.547811    4780 start.go:128] duration metric: createHost completed in 2.229443208s
	I0912 15:03:46.547884    4780 start.go:83] releasing machines lock for "default-k8s-diff-port-803000", held for 2.229581792s
	W0912 15:03:46.547945    4780 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:46.566158    4780 out.go:177] * Deleting "default-k8s-diff-port-803000" in qemu2 ...
	W0912 15:03:46.583797    4780 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:46.583826    4780 start.go:703] Will try again in 5 seconds ...
	I0912 15:03:51.586025    4780 start.go:365] acquiring machines lock for default-k8s-diff-port-803000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:51.586445    4780 start.go:369] acquired machines lock for "default-k8s-diff-port-803000" in 316.083µs
	I0912 15:03:51.586556    4780 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-803000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-803000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:03:51.586835    4780 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:03:51.595215    4780 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:03:51.642249    4780 start.go:159] libmachine.API.Create for "default-k8s-diff-port-803000" (driver="qemu2")
	I0912 15:03:51.642296    4780 client.go:168] LocalClient.Create starting
	I0912 15:03:51.642417    4780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:03:51.642479    4780 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:51.642502    4780 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:51.642566    4780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:03:51.642599    4780 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:51.642614    4780 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:51.643160    4780 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:03:51.771226    4780 main.go:141] libmachine: Creating SSH key...
	I0912 15:03:51.849654    4780 main.go:141] libmachine: Creating Disk image...
	I0912 15:03:51.849667    4780 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:03:51.849803    4780 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/disk.qcow2
	I0912 15:03:51.858235    4780 main.go:141] libmachine: STDOUT: 
	I0912 15:03:51.858250    4780 main.go:141] libmachine: STDERR: 
	I0912 15:03:51.858300    4780 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/disk.qcow2 +20000M
	I0912 15:03:51.865614    4780 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:03:51.865626    4780 main.go:141] libmachine: STDERR: 
	I0912 15:03:51.865639    4780 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/disk.qcow2
	I0912 15:03:51.865647    4780 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:03:51.865691    4780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:41:79:5c:cb:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/disk.qcow2
	I0912 15:03:51.867271    4780 main.go:141] libmachine: STDOUT: 
	I0912 15:03:51.867286    4780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:51.867299    4780 client.go:171] LocalClient.Create took 225.001333ms
	I0912 15:03:53.869424    4780 start.go:128] duration metric: createHost completed in 2.282621209s
	I0912 15:03:53.869494    4780 start.go:83] releasing machines lock for "default-k8s-diff-port-803000", held for 2.283082875s
	W0912 15:03:53.869923    4780 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-803000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-803000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:53.880590    4780 out.go:177] 
	W0912 15:03:53.884636    4780 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:03:53.884679    4780 out.go:239] * 
	* 
	W0912 15:03:53.887496    4780 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:03:53.897598    4780 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-803000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000: exit status 7 (68.166667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-803000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.77s)

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-280000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-280000 create -f testdata/busybox.yaml: exit status 1 (30.159208ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-280000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000: exit status 7 (29.024375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-280000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000: exit status 7 (28.782584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-280000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
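When triaging a report like this one, the repeated `--- FAIL:` markers can be tallied straight from the raw log. A small helper sketch (the function name and log-file argument are placeholders, not part of the test suite):

```shell
count_fails() {
  # Print the number of '--- FAIL:' lines in a go test log file.
  # '--' stops grep from treating the leading dashes as options.
  grep -c -- '--- FAIL:' "$1"
}
```

For example, `count_fails testout.txt` against the full log for this run should report the 87 failures listed in the summary table.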

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-280000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-280000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-280000 describe deploy/metrics-server -n kube-system: exit status 1 (26.222584ms)

** stderr ** 
	error: context "embed-certs-280000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-280000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000: exit status 7 (28.8215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-280000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
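The `context "embed-certs-280000" does not exist` error above is a downstream symptom: the VM never started, so the kubeconfig entry was cleaned up. A pre-flight check could confirm the context before running kubectl commands; this sketch reads the output of `kubectl config get-contexts -o name` on stdin (the helper name is illustrative):

```shell
has_context() {
  # Succeed only if the exact context name appears on stdin,
  # one name per line, as produced by 'kubectl config get-contexts -o name'.
  grep -qx -- "$1"
}

# Hypothetical usage:
#   kubectl config get-contexts -o name | has_context embed-certs-280000 \
#     || echo "context missing; cluster likely never provisioned"
```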

TestStartStop/group/embed-certs/serial/SecondStart (5.22s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-280000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-280000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.180213167s)

-- stdout --
	* [embed-certs-280000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-280000 in cluster embed-certs-280000
	* Restarting existing qemu2 VM for "embed-certs-280000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-280000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:03:49.282966    4812 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:49.283083    4812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:49.283086    4812 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:49.283088    4812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:49.283217    4812 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:49.284181    4812 out.go:303] Setting JSON to false
	I0912 15:03:49.299523    4812 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2003,"bootTime":1694554226,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:03:49.299603    4812 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:03:49.304538    4812 out.go:177] * [embed-certs-280000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:03:49.311343    4812 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:03:49.315477    4812 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:03:49.311411    4812 notify.go:220] Checking for updates...
	I0912 15:03:49.323483    4812 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:03:49.326589    4812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:03:49.329538    4812 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:03:49.332508    4812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:03:49.335757    4812 config.go:182] Loaded profile config "embed-certs-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:49.336019    4812 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:03:49.340520    4812 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:03:49.347504    4812 start.go:298] selected driver: qemu2
	I0912 15:03:49.347508    4812 start.go:902] validating driver "qemu2" against &{Name:embed-certs-280000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-280000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:03:49.347574    4812 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:03:49.349602    4812 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:03:49.349629    4812 cni.go:84] Creating CNI manager for ""
	I0912 15:03:49.349636    4812 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:03:49.349640    4812 start_flags.go:321] config:
	{Name:embed-certs-280000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-280000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:03:49.353661    4812 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:49.361536    4812 out.go:177] * Starting control plane node embed-certs-280000 in cluster embed-certs-280000
	I0912 15:03:49.365518    4812 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:03:49.365547    4812 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:03:49.365565    4812 cache.go:57] Caching tarball of preloaded images
	I0912 15:03:49.365633    4812 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:03:49.365645    4812 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:03:49.365713    4812 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/embed-certs-280000/config.json ...
	I0912 15:03:49.366035    4812 start.go:365] acquiring machines lock for embed-certs-280000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:49.366064    4812 start.go:369] acquired machines lock for "embed-certs-280000" in 23.459µs
	I0912 15:03:49.366073    4812 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:03:49.366079    4812 fix.go:54] fixHost starting: 
	I0912 15:03:49.366188    4812 fix.go:102] recreateIfNeeded on embed-certs-280000: state=Stopped err=<nil>
	W0912 15:03:49.366196    4812 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 15:03:49.370504    4812 out.go:177] * Restarting existing qemu2 VM for "embed-certs-280000" ...
	I0912 15:03:49.378319    4812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:57:d8:f8:9f:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/disk.qcow2
	I0912 15:03:49.380053    4812 main.go:141] libmachine: STDOUT: 
	I0912 15:03:49.380068    4812 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:49.380092    4812 fix.go:56] fixHost completed within 14.012875ms
	I0912 15:03:49.380096    4812 start.go:83] releasing machines lock for "embed-certs-280000", held for 14.028791ms
	W0912 15:03:49.380102    4812 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:03:49.380138    4812 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:49.380142    4812 start.go:703] Will try again in 5 seconds ...
	I0912 15:03:54.382120    4812 start.go:365] acquiring machines lock for embed-certs-280000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:54.382206    4812 start.go:369] acquired machines lock for "embed-certs-280000" in 68.041µs
	I0912 15:03:54.382219    4812 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:03:54.382223    4812 fix.go:54] fixHost starting: 
	I0912 15:03:54.382348    4812 fix.go:102] recreateIfNeeded on embed-certs-280000: state=Stopped err=<nil>
	W0912 15:03:54.382354    4812 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 15:03:54.390516    4812 out.go:177] * Restarting existing qemu2 VM for "embed-certs-280000" ...
	I0912 15:03:54.398524    4812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:57:d8:f8:9f:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/embed-certs-280000/disk.qcow2
	I0912 15:03:54.400342    4812 main.go:141] libmachine: STDOUT: 
	I0912 15:03:54.400354    4812 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:54.400370    4812 fix.go:56] fixHost completed within 18.148041ms
	I0912 15:03:54.400374    4812 start.go:83] releasing machines lock for "embed-certs-280000", held for 18.164583ms
	W0912 15:03:54.400436    4812 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-280000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-280000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:54.409522    4812 out.go:177] 
	W0912 15:03:54.412578    4812 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:03:54.412582    4812 out.go:239] * 
	* 
	W0912 15:03:54.413069    4812 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:03:54.424314    4812 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-280000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000: exit status 7 (35.563334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-280000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.22s)
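The repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` errors above suggest the socket_vmnet daemon was not running on the build agent when the qemu2 driver tried to attach. A minimal diagnostic sketch; the socket path comes from the log itself, while the helper name and the `brew services` hint are assumptions that may differ per install:

```shell
check_socket() {
  # True (exit 0) only if the given path exists and is a unix domain socket.
  [ -S "$1" ]
}

if check_socket /var/run/socket_vmnet; then
  echo "socket_vmnet socket present"
else
  echo "socket_vmnet socket missing; start the daemon (e.g. 'sudo brew services start socket_vmnet')"
fi
```

If the socket is missing, every qemu2-driver test in this group fails the same way, which matches the pattern across the failures in this report.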

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-803000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-803000 create -f testdata/busybox.yaml: exit status 1 (30.550875ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-803000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000: exit status 7 (28.619208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-803000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000: exit status 7 (28.4265ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-803000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-803000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-803000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-803000 describe deploy/metrics-server -n kube-system: exit status 1 (26.26175ms)

** stderr ** 
	error: context "default-k8s-diff-port-803000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-803000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000: exit status 7 (28.827709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-803000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-803000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-803000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.224670458s)

-- stdout --
	* [default-k8s-diff-port-803000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-803000 in cluster default-k8s-diff-port-803000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-803000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-803000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0912 15:03:54.360304    4844 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:54.360426    4844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:54.360429    4844 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:54.360431    4844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:54.360561    4844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:54.361583    4844 out.go:303] Setting JSON to false
	I0912 15:03:54.376613    4844 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2008,"bootTime":1694554226,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:03:54.376671    4844 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:03:54.380600    4844 out.go:177] * [default-k8s-diff-port-803000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:03:54.390516    4844 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:03:54.387616    4844 notify.go:220] Checking for updates...
	I0912 15:03:54.401571    4844 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:03:54.412560    4844 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:03:54.424312    4844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:03:54.436459    4844 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:03:54.443550    4844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:03:54.447824    4844 config.go:182] Loaded profile config "default-k8s-diff-port-803000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:54.448113    4844 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:03:54.452483    4844 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:03:54.459449    4844 start.go:298] selected driver: qemu2
	I0912 15:03:54.459458    4844 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-803000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-803000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:03:54.459521    4844 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:03:54.461784    4844 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 15:03:54.461814    4844 cni.go:84] Creating CNI manager for ""
	I0912 15:03:54.461828    4844 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:03:54.461833    4844 start_flags.go:321] config:
	{Name:default-k8s-diff-port-803000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-8030
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:03:54.465549    4844 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:54.472469    4844 out.go:177] * Starting control plane node default-k8s-diff-port-803000 in cluster default-k8s-diff-port-803000
	I0912 15:03:54.476513    4844 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:03:54.476548    4844 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:03:54.476565    4844 cache.go:57] Caching tarball of preloaded images
	I0912 15:03:54.476644    4844 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:03:54.476649    4844 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:03:54.476716    4844 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/default-k8s-diff-port-803000/config.json ...
	I0912 15:03:54.476965    4844 start.go:365] acquiring machines lock for default-k8s-diff-port-803000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:54.476992    4844 start.go:369] acquired machines lock for "default-k8s-diff-port-803000" in 18.917µs
	I0912 15:03:54.477001    4844 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:03:54.477006    4844 fix.go:54] fixHost starting: 
	I0912 15:03:54.477128    4844 fix.go:102] recreateIfNeeded on default-k8s-diff-port-803000: state=Stopped err=<nil>
	W0912 15:03:54.477136    4844 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 15:03:54.484473    4844 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-803000" ...
	I0912 15:03:54.487455    4844 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:41:79:5c:cb:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/disk.qcow2
	I0912 15:03:54.489384    4844 main.go:141] libmachine: STDOUT: 
	I0912 15:03:54.489404    4844 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:54.489435    4844 fix.go:56] fixHost completed within 12.427625ms
	I0912 15:03:54.489441    4844 start.go:83] releasing machines lock for "default-k8s-diff-port-803000", held for 12.444458ms
	W0912 15:03:54.489447    4844 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:03:54.489499    4844 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:54.489505    4844 start.go:703] Will try again in 5 seconds ...
	I0912 15:03:59.491615    4844 start.go:365] acquiring machines lock for default-k8s-diff-port-803000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:59.492130    4844 start.go:369] acquired machines lock for "default-k8s-diff-port-803000" in 419.625µs
	I0912 15:03:59.492337    4844 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:03:59.492357    4844 fix.go:54] fixHost starting: 
	I0912 15:03:59.493006    4844 fix.go:102] recreateIfNeeded on default-k8s-diff-port-803000: state=Stopped err=<nil>
	W0912 15:03:59.493031    4844 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 15:03:59.503984    4844 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-803000" ...
	I0912 15:03:59.508060    4844 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:41:79:5c:cb:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/default-k8s-diff-port-803000/disk.qcow2
	I0912 15:03:59.517506    4844 main.go:141] libmachine: STDOUT: 
	I0912 15:03:59.517584    4844 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:59.517701    4844 fix.go:56] fixHost completed within 25.331084ms
	I0912 15:03:59.517749    4844 start.go:83] releasing machines lock for "default-k8s-diff-port-803000", held for 25.568916ms
	W0912 15:03:59.518046    4844 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-803000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-803000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:59.526221    4844 out.go:177] 
	W0912 15:03:59.532976    4844 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:03:59.533005    4844 out.go:239] * 
	* 
	W0912 15:03:59.535905    4844 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:03:59.545772    4844 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-803000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
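Every restart attempt above fails on the same `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon on the CI host was not accepting connections, so the qemu2 driver could never attach its network. A minimal pre-flight check is sketched below; the socket path comes from the log, while the helper name is ours:

```python
import os
import socket
import stat

def socket_vmnet_ready(path: str) -> bool:
    """Return True if `path` is a UNIX socket with a listener accepting connections."""
    try:
        if not stat.S_ISSOCK(os.stat(path).st_mode):
            return False  # path exists but is not a socket
    except FileNotFoundError:
        return False  # daemon never created its socket
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)  # ECONNREFUSED here matches the error in the log
        return True
    except OSError:
        return False
    finally:
        s.close()

if __name__ == "__main__":
    path = "/var/run/socket_vmnet"
    state = "ready" if socket_vmnet_ready(path) else "not accepting connections"
    print(f"{path}: {state}")
```

Running such a check before the test suite would turn dozens of per-test exit-80 failures into a single, obvious infrastructure error.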
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000: exit status 7 (67.324375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-803000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.29s)
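The post-mortem's `exit status 7 (may be ok)` from `minikube status` is a bitmask, not a plain error code. Assuming the flag values used by minikube's status command (1 = host, 2 = kubelet, 4 = apiserver not running; verify against your minikube version), a small decoder shows why 7 is consistent with the `Stopped` output:

```python
# Flag values as used by minikube's `status` command (an assumption based on
# its documented exit-code bitmask; check cmd/minikube/cmd/status.go).
STATUS_FLAGS = {
    1: "host not running",
    2: "kubelet not running",
    4: "apiserver not running",
}

def decode_status_exit(code: int) -> list[str]:
    """Expand a `minikube status` exit code into its component flags."""
    return [name for bit, name in STATUS_FLAGS.items() if code & bit]

print(decode_status_exit(7))  # all three components down: host, kubelet, apiserver
```

Exit 7 = 1 + 2 + 4, so the helper's "may be ok" note fits: the host simply never came up, which is exactly what the failed `SecondStart` above predicts.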

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-280000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000: exit status 7 (28.118541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-280000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
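The `context "embed-certs-280000" does not exist` errors in this group mean the profile was never written into the kubeconfig, because the earlier start failed before provisioning completed. The check kubectl performs reduces to a name lookup in the kubeconfig's `contexts` list; a sketch against a plain-dict representation of the file (field names follow the kubeconfig v1 schema, the helper name is ours):

```python
def has_context(kubeconfig: dict, name: str) -> bool:
    """True if the kubeconfig mapping defines a context with this name."""
    return any(c.get("name") == name for c in kubeconfig.get("contexts") or [])

# Example: a kubeconfig left behind by a failed start has no matching context,
# which is what produces the "context ... does not exist" error above.
cfg = {
    "apiVersion": "v1",
    "kind": "Config",
    "contexts": [{"name": "minikube", "context": {"cluster": "minikube"}}],
}
print(has_context(cfg, "embed-certs-280000"))  # → False
```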

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-280000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-280000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-280000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.284083ms)

** stderr ** 
	error: context "embed-certs-280000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-280000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000: exit status 7 (29.054334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-280000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-280000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-280000 "sudo crictl images -o json": exit status 89 (38.545125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-280000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-280000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-280000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
}
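The `-want +got` diff above lists every expected image as missing because `crictl` never ran: the test received the `*` control-plane banner instead of JSON, so decoding failed and the "got" set was empty. The comparison itself is simple set logic over the parsed `crictl images -o json` output; a sketch (helper names are ours, the JSON shape follows crictl's `images`/`repoTags` fields):

```python
import json

def missing_images(want: list[str], crictl_json: str) -> list[str]:
    """Return expected image refs absent from `crictl images -o json` output."""
    data = json.loads(crictl_json)  # raises ValueError on the '*' banner text
    got = {tag for img in data.get("images", []) for tag in img.get("repoTags", [])}
    return [w for w in want if w not in got]

want = ["registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.9-0"]
out = json.dumps({"images": [{"repoTags": ["registry.k8s.io/pause:3.9"]}]})
print(missing_images(want, out))  # → ['registry.k8s.io/etcd:3.5.9-0']
```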
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000: exit status 7 (28.696875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-280000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-280000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-280000 --alsologtostderr -v=1: exit status 89 (41.546333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-280000"

-- /stdout --
** stderr ** 
	I0912 15:03:54.652437    4863 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:54.652590    4863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:54.652593    4863 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:54.652596    4863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:54.652742    4863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:54.652954    4863 out.go:303] Setting JSON to false
	I0912 15:03:54.652965    4863 mustload.go:65] Loading cluster: embed-certs-280000
	I0912 15:03:54.653147    4863 config.go:182] Loaded profile config "embed-certs-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:54.657885    4863 out.go:177] * The control plane node must be running for this command
	I0912 15:03:54.662091    4863 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-280000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-280000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000: exit status 7 (28.695208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-280000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000: exit status 7 (29.13275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-280000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (10.14s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-091000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-091000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (10.074907084s)

-- stdout --
	* [newest-cni-091000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-091000 in cluster newest-cni-091000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-091000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
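The stdout above shows minikube's create-fail-delete-retry loop: one `Deleting "newest-cni-091000"` and one re-create before giving up with exit status 80. The control flow reduces to a bounded retry with cleanup between attempts; a rough sketch of that shape (all names ours, not minikube's actual implementation):

```python
def start_with_retry(start, cleanup, attempts: int = 2):
    """Try `start()` up to `attempts` times, running `cleanup()` between tries."""
    last_err = None
    for i in range(attempts):
        try:
            return start()
        except RuntimeError as err:    # stand-in for "driver start" failures
            last_err = err
            if i + 1 < attempts:
                cleanup()              # e.g. 'Deleting "newest-cni-091000" in qemu2 ...'
    raise SystemExit(80) from last_err  # mirrors the suite's exit status 80
```

Since the underlying fault here is a dead socket_vmnet daemon, the retry is futile: both attempts hit the same `Connection refused` and the deletion between them only adds churn.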
** stderr ** 
	I0912 15:03:55.102489    4886 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:55.102875    4886 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:55.102880    4886 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:55.102883    4886 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:55.103093    4886 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:55.104514    4886 out.go:303] Setting JSON to false
	I0912 15:03:55.119897    4886 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2009,"bootTime":1694554226,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:03:55.119966    4886 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:03:55.125057    4886 out.go:177] * [newest-cni-091000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:03:55.133064    4886 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:03:55.137073    4886 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:03:55.133107    4886 notify.go:220] Checking for updates...
	I0912 15:03:55.145009    4886 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:03:55.148107    4886 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:03:55.151079    4886 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:03:55.154075    4886 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:03:55.157524    4886 config.go:182] Loaded profile config "default-k8s-diff-port-803000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:55.157588    4886 config.go:182] Loaded profile config "multinode-914000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:55.157642    4886 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:03:55.162041    4886 out.go:177] * Using the qemu2 driver based on user configuration
	I0912 15:03:55.169116    4886 start.go:298] selected driver: qemu2
	I0912 15:03:55.169121    4886 start.go:902] validating driver "qemu2" against <nil>
	I0912 15:03:55.169127    4886 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:03:55.171122    4886 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0912 15:03:55.171145    4886 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0912 15:03:55.178046    4886 out.go:177] * Automatically selected the socket_vmnet network
	I0912 15:03:55.181167    4886 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0912 15:03:55.181198    4886 cni.go:84] Creating CNI manager for ""
	I0912 15:03:55.181206    4886 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:03:55.181212    4886 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 15:03:55.181219    4886 start_flags.go:321] config:
	{Name:newest-cni-091000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-091000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/s
ocket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:03:55.185830    4886 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:03:55.194124    4886 out.go:177] * Starting control plane node newest-cni-091000 in cluster newest-cni-091000
	I0912 15:03:55.197896    4886 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:03:55.197919    4886 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:03:55.197933    4886 cache.go:57] Caching tarball of preloaded images
	I0912 15:03:55.198012    4886 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:03:55.198019    4886 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:03:55.198099    4886 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/newest-cni-091000/config.json ...
	I0912 15:03:55.198113    4886 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/newest-cni-091000/config.json: {Name:mk8a5d8ce7069b5dca65b69d5ce1bc60ef7b8cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 15:03:55.198386    4886 start.go:365] acquiring machines lock for newest-cni-091000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:03:55.198420    4886 start.go:369] acquired machines lock for "newest-cni-091000" in 27.791µs
	I0912 15:03:55.198433    4886 start.go:93] Provisioning new machine with config: &{Name:newest-cni-091000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-091000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:03:55.198481    4886 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:03:55.205965    4886 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:03:55.223540    4886 start.go:159] libmachine.API.Create for "newest-cni-091000" (driver="qemu2")
	I0912 15:03:55.223564    4886 client.go:168] LocalClient.Create starting
	I0912 15:03:55.223621    4886 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:03:55.223654    4886 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:55.223673    4886 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:55.223721    4886 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:03:55.223741    4886 main.go:141] libmachine: Decoding PEM data...
	I0912 15:03:55.223753    4886 main.go:141] libmachine: Parsing certificate...
	I0912 15:03:55.224129    4886 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:03:55.339599    4886 main.go:141] libmachine: Creating SSH key...
	I0912 15:03:55.493550    4886 main.go:141] libmachine: Creating Disk image...
	I0912 15:03:55.493556    4886 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:03:55.493724    4886 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/disk.qcow2
	I0912 15:03:55.502762    4886 main.go:141] libmachine: STDOUT: 
	I0912 15:03:55.502774    4886 main.go:141] libmachine: STDERR: 
	I0912 15:03:55.502834    4886 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/disk.qcow2 +20000M
	I0912 15:03:55.509900    4886 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:03:55.509915    4886 main.go:141] libmachine: STDERR: 
	I0912 15:03:55.509932    4886 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/disk.qcow2
	I0912 15:03:55.509944    4886 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:03:55.509977    4886 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:af:6d:34:55:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/disk.qcow2
	I0912 15:03:55.511442    4886 main.go:141] libmachine: STDOUT: 
	I0912 15:03:55.511455    4886 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:03:55.511474    4886 client.go:171] LocalClient.Create took 287.913583ms
	I0912 15:03:57.513602    4886 start.go:128] duration metric: createHost completed in 2.315153542s
	I0912 15:03:57.513665    4886 start.go:83] releasing machines lock for "newest-cni-091000", held for 2.315290625s
	W0912 15:03:57.513728    4886 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:57.522111    4886 out.go:177] * Deleting "newest-cni-091000" in qemu2 ...
	W0912 15:03:57.542288    4886 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:03:57.542317    4886 start.go:703] Will try again in 5 seconds ...
	I0912 15:04:02.544445    4886 start.go:365] acquiring machines lock for newest-cni-091000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:04:02.544916    4886 start.go:369] acquired machines lock for "newest-cni-091000" in 334.125µs
	I0912 15:04:02.545078    4886 start.go:93] Provisioning new machine with config: &{Name:newest-cni-091000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-091000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 15:04:02.545437    4886 start.go:125] createHost starting for "" (driver="qemu2")
	I0912 15:04:02.548439    4886 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0912 15:04:02.593160    4886 start.go:159] libmachine.API.Create for "newest-cni-091000" (driver="qemu2")
	I0912 15:04:02.593218    4886 client.go:168] LocalClient.Create starting
	I0912 15:04:02.593365    4886 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/ca.pem
	I0912 15:04:02.593431    4886 main.go:141] libmachine: Decoding PEM data...
	I0912 15:04:02.593452    4886 main.go:141] libmachine: Parsing certificate...
	I0912 15:04:02.593526    4886 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17194-1051/.minikube/certs/cert.pem
	I0912 15:04:02.593563    4886 main.go:141] libmachine: Decoding PEM data...
	I0912 15:04:02.593578    4886 main.go:141] libmachine: Parsing certificate...
	I0912 15:04:02.594079    4886 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso...
	I0912 15:04:02.720911    4886 main.go:141] libmachine: Creating SSH key...
	I0912 15:04:03.087688    4886 main.go:141] libmachine: Creating Disk image...
	I0912 15:04:03.087702    4886 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0912 15:04:03.087908    4886 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/disk.qcow2.raw /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/disk.qcow2
	I0912 15:04:03.097095    4886 main.go:141] libmachine: STDOUT: 
	I0912 15:04:03.097108    4886 main.go:141] libmachine: STDERR: 
	I0912 15:04:03.097170    4886 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/disk.qcow2 +20000M
	I0912 15:04:03.104387    4886 main.go:141] libmachine: STDOUT: Image resized.
	
	I0912 15:04:03.104416    4886 main.go:141] libmachine: STDERR: 
	I0912 15:04:03.104428    4886 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/disk.qcow2
	I0912 15:04:03.104438    4886 main.go:141] libmachine: Starting QEMU VM...
	I0912 15:04:03.104482    4886 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:fd:d4:cf:b6:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/disk.qcow2
	I0912 15:04:03.106037    4886 main.go:141] libmachine: STDOUT: 
	I0912 15:04:03.106048    4886 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:04:03.106063    4886 client.go:171] LocalClient.Create took 512.849958ms
	I0912 15:04:05.106697    4886 start.go:128] duration metric: createHost completed in 2.561239333s
	I0912 15:04:05.109513    4886 start.go:83] releasing machines lock for "newest-cni-091000", held for 2.564624958s
	W0912 15:04:05.109837    4886 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-091000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-091000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:04:05.117429    4886 out.go:177] 
	W0912 15:04:05.121444    4886 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:04:05.121470    4886 out.go:239] * 
	* 
	W0912 15:04:05.124185    4886 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:04:05.141414    4886 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-091000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-091000 -n newest-cni-091000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-091000 -n newest-cni-091000: exit status 7 (66.367167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-091000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-803000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000: exit status 7 (32.837167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-803000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-803000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-803000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-803000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.863583ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-803000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-803000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000: exit status 7 (29.404459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-803000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-803000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-803000 "sudo crictl images -o json": exit status 89 (42.164416ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-803000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-803000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-803000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000: exit status 7 (28.973834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-803000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-803000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-803000 --alsologtostderr -v=1: exit status 89 (40.645125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-803000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 15:03:59.815639    4908 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:03:59.815791    4908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:59.815794    4908 out.go:309] Setting ErrFile to fd 2...
	I0912 15:03:59.815797    4908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:03:59.815925    4908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:03:59.816139    4908 out.go:303] Setting JSON to false
	I0912 15:03:59.816150    4908 mustload.go:65] Loading cluster: default-k8s-diff-port-803000
	I0912 15:03:59.816340    4908 config.go:182] Loaded profile config "default-k8s-diff-port-803000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:03:59.820824    4908 out.go:177] * The control plane node must be running for this command
	I0912 15:03:59.824931    4908 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-803000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-803000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000: exit status 7 (29.125542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-803000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000: exit status 7 (28.178917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-803000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-091000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-091000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.184329958s)

                                                
                                                
-- stdout --
	* [newest-cni-091000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-091000 in cluster newest-cni-091000
	* Restarting existing qemu2 VM for "newest-cni-091000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-091000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 15:04:05.461834    4945 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:04:05.461958    4945 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:04:05.461960    4945 out.go:309] Setting ErrFile to fd 2...
	I0912 15:04:05.461963    4945 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:04:05.462082    4945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:04:05.463070    4945 out.go:303] Setting JSON to false
	I0912 15:04:05.478153    4945 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2019,"bootTime":1694554226,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 15:04:05.478242    4945 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 15:04:05.482591    4945 out.go:177] * [newest-cni-091000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 15:04:05.490637    4945 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 15:04:05.494616    4945 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 15:04:05.490724    4945 notify.go:220] Checking for updates...
	I0912 15:04:05.501629    4945 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 15:04:05.504599    4945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 15:04:05.507587    4945 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 15:04:05.510578    4945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 15:04:05.513975    4945 config.go:182] Loaded profile config "newest-cni-091000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:04:05.514236    4945 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 15:04:05.518534    4945 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 15:04:05.525578    4945 start.go:298] selected driver: qemu2
	I0912 15:04:05.525583    4945 start.go:902] validating driver "qemu2" against &{Name:newest-cni-091000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-091000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<n
il> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:04:05.525637    4945 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 15:04:05.527648    4945 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0912 15:04:05.527673    4945 cni.go:84] Creating CNI manager for ""
	I0912 15:04:05.527681    4945 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 15:04:05.527686    4945 start_flags.go:321] config:
	{Name:newest-cni-091000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-091000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeReques
ted:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 15:04:05.531826    4945 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 15:04:05.538555    4945 out.go:177] * Starting control plane node newest-cni-091000 in cluster newest-cni-091000
	I0912 15:04:05.542552    4945 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 15:04:05.542569    4945 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 15:04:05.542588    4945 cache.go:57] Caching tarball of preloaded images
	I0912 15:04:05.542670    4945 preload.go:174] Found /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 15:04:05.542675    4945 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 15:04:05.542741    4945 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/newest-cni-091000/config.json ...
	I0912 15:04:05.543109    4945 start.go:365] acquiring machines lock for newest-cni-091000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:04:05.543134    4945 start.go:369] acquired machines lock for "newest-cni-091000" in 19.667µs
	I0912 15:04:05.543143    4945 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:04:05.543149    4945 fix.go:54] fixHost starting: 
	I0912 15:04:05.543265    4945 fix.go:102] recreateIfNeeded on newest-cni-091000: state=Stopped err=<nil>
	W0912 15:04:05.543273    4945 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 15:04:05.547613    4945 out.go:177] * Restarting existing qemu2 VM for "newest-cni-091000" ...
	I0912 15:04:05.555612    4945 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:fd:d4:cf:b6:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/disk.qcow2
	I0912 15:04:05.557576    4945 main.go:141] libmachine: STDOUT: 
	I0912 15:04:05.557591    4945 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:04:05.557617    4945 fix.go:56] fixHost completed within 14.468125ms
	I0912 15:04:05.557623    4945 start.go:83] releasing machines lock for "newest-cni-091000", held for 14.4855ms
	W0912 15:04:05.557629    4945 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:04:05.557670    4945 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:04:05.557675    4945 start.go:703] Will try again in 5 seconds ...
	I0912 15:04:10.559782    4945 start.go:365] acquiring machines lock for newest-cni-091000: {Name:mkae2c526b6796c010696abbd3e5493c0715e218 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 15:04:10.560242    4945 start.go:369] acquired machines lock for "newest-cni-091000" in 357.75µs
	I0912 15:04:10.560381    4945 start.go:96] Skipping create...Using existing machine configuration
	I0912 15:04:10.560405    4945 fix.go:54] fixHost starting: 
	I0912 15:04:10.561270    4945 fix.go:102] recreateIfNeeded on newest-cni-091000: state=Stopped err=<nil>
	W0912 15:04:10.561297    4945 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 15:04:10.568695    4945 out.go:177] * Restarting existing qemu2 VM for "newest-cni-091000" ...
	I0912 15:04:10.572804    4945 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:fd:d4:cf:b6:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17194-1051/.minikube/machines/newest-cni-091000/disk.qcow2
	I0912 15:04:10.582177    4945 main.go:141] libmachine: STDOUT: 
	I0912 15:04:10.582251    4945 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0912 15:04:10.582339    4945 fix.go:56] fixHost completed within 21.933417ms
	I0912 15:04:10.582359    4945 start.go:83] releasing machines lock for "newest-cni-091000", held for 22.08875ms
	W0912 15:04:10.582548    4945 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-091000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-091000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0912 15:04:10.589791    4945 out.go:177] 
	W0912 15:04:10.593780    4945 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0912 15:04:10.593804    4945 out.go:239] * 
	* 
	W0912 15:04:10.596220    4945 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 15:04:10.605721    4945 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-091000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-091000 -n newest-cni-091000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-091000 -n newest-cni-091000: exit status 7 (67.054958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-091000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-091000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-091000 "sudo crictl images -o json": exit status 89 (44.283375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-091000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-091000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-091000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-091000 -n newest-cni-091000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-091000 -n newest-cni-091000: exit status 7 (29.621208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-091000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-091000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-091000 --alsologtostderr -v=1: exit status 89 (41.883417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-091000"

-- /stdout --
** stderr ** 
	I0912 15:04:10.789488    4959 out.go:296] Setting OutFile to fd 1 ...
	I0912 15:04:10.789651    4959 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:04:10.789654    4959 out.go:309] Setting ErrFile to fd 2...
	I0912 15:04:10.789656    4959 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 15:04:10.789796    4959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 15:04:10.790017    4959 out.go:303] Setting JSON to false
	I0912 15:04:10.790026    4959 mustload.go:65] Loading cluster: newest-cni-091000
	I0912 15:04:10.790220    4959 config.go:182] Loaded profile config "newest-cni-091000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 15:04:10.794996    4959 out.go:177] * The control plane node must be running for this command
	I0912 15:04:10.799058    4959 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-091000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-091000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-091000 -n newest-cni-091000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-091000 -n newest-cni-091000: exit status 7 (30.458334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-091000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-091000 -n newest-cni-091000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-091000 -n newest-cni-091000: exit status 7 (29.206875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-091000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

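The newest-cni failures above, like most QEMU failures in this report, reduce to one symptom: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not listening on the CI host when the qemu2 driver tried to attach. A minimal pre-flight check is sketched below; the socket path comes from the `SocketVMnetPath` recorded in the minikube config in the log, while the launchd service label in the comment is an assumption that would need verifying against the actual install on the agent:

```shell
#!/bin/sh
# Pre-flight check for the unix socket the qemu2 driver needs.
# Path matches SocketVMnetPath in the minikube profile config above.
SOCKET="${SOCKET:-/var/run/socket_vmnet}"

if [ -S "$SOCKET" ]; then
    echo "ok: $SOCKET exists"
else
    echo "missing: $SOCKET (socket_vmnet daemon not running?)"
    # A likely remedy on the CI host; the service label is an assumption,
    # check it first with: sudo launchctl list | grep socket_vmnet
    #   sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
fi
```

Running this before the suite would distinguish a host-setup problem from a genuine minikube regression.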

Test pass (136/244)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.1/json-events 16.28
11 TestDownloadOnly/v1.28.1/preload-exists 0
14 TestDownloadOnly/v1.28.1/kubectl 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
19 TestBinaryMirror 0.37
30 TestHyperKitDriverInstallOrUpdate 7.87
33 TestErrorSpam/setup 29.96
34 TestErrorSpam/start 0.35
35 TestErrorSpam/status 0.25
36 TestErrorSpam/pause 0.65
37 TestErrorSpam/unpause 0.62
38 TestErrorSpam/stop 12.24
41 TestFunctional/serial/CopySyncFile 0
42 TestFunctional/serial/StartWithProxy 83.52
43 TestFunctional/serial/AuditLog 0
44 TestFunctional/serial/SoftStart 35.18
45 TestFunctional/serial/KubeContext 0.03
46 TestFunctional/serial/KubectlGetPods 0.05
49 TestFunctional/serial/CacheCmd/cache/add_remote 3.53
50 TestFunctional/serial/CacheCmd/cache/add_local 1.22
51 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
52 TestFunctional/serial/CacheCmd/cache/list 0.03
53 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
54 TestFunctional/serial/CacheCmd/cache/cache_reload 0.93
55 TestFunctional/serial/CacheCmd/cache/delete 0.07
56 TestFunctional/serial/MinikubeKubectlCmd 0.4
57 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.53
58 TestFunctional/serial/ExtraConfig 33.18
59 TestFunctional/serial/ComponentHealth 0.04
60 TestFunctional/serial/LogsCmd 0.64
61 TestFunctional/serial/LogsFileCmd 0.58
62 TestFunctional/serial/InvalidService 3.63
64 TestFunctional/parallel/ConfigCmd 0.21
65 TestFunctional/parallel/DashboardCmd 8.96
66 TestFunctional/parallel/DryRun 0.21
67 TestFunctional/parallel/InternationalLanguage 0.1
68 TestFunctional/parallel/StatusCmd 0.26
73 TestFunctional/parallel/AddonsCmd 0.12
74 TestFunctional/parallel/PersistentVolumeClaim 23.99
76 TestFunctional/parallel/SSHCmd 0.13
77 TestFunctional/parallel/CpCmd 0.29
79 TestFunctional/parallel/FileSync 0.07
80 TestFunctional/parallel/CertSync 0.42
84 TestFunctional/parallel/NodeLabels 0.04
86 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
88 TestFunctional/parallel/License 0.62
89 TestFunctional/parallel/Version/short 0.04
90 TestFunctional/parallel/Version/components 0.17
91 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
92 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
93 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
94 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
95 TestFunctional/parallel/ImageCommands/ImageBuild 2.13
96 TestFunctional/parallel/ImageCommands/Setup 2.18
97 TestFunctional/parallel/DockerEnv/bash 0.41
98 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
99 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
100 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
101 TestFunctional/parallel/ServiceCmd/DeployApp 13.1
102 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.13
103 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.53
104 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.01
105 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
106 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
107 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.61
108 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.63
111 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.13
114 TestFunctional/parallel/ServiceCmd/List 0.1
115 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
116 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
117 TestFunctional/parallel/ServiceCmd/Format 0.11
118 TestFunctional/parallel/ServiceCmd/URL 0.11
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.19
126 TestFunctional/parallel/ProfileCmd/profile_list 0.15
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
129 TestFunctional/parallel/MountCmd/specific-port 0.92
130 TestFunctional/parallel/MountCmd/VerifyCleanup 0.6
131 TestFunctional/delete_addon-resizer_images 0.12
132 TestFunctional/delete_my-image_image 0.04
133 TestFunctional/delete_minikube_cached_images 0.04
137 TestImageBuild/serial/Setup 28.63
138 TestImageBuild/serial/NormalBuild 1.61
140 TestImageBuild/serial/BuildWithDockerIgnore 0.12
141 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.11
144 TestIngressAddonLegacy/StartLegacyK8sCluster 79.3
146 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 19.35
147 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.24
151 TestJSONOutput/start/Command 44.74
152 TestJSONOutput/start/Audit 0
154 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/pause/Command 0.26
158 TestJSONOutput/pause/Audit 0
160 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/unpause/Command 0.23
164 TestJSONOutput/unpause/Audit 0
166 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/stop/Command 12.07
170 TestJSONOutput/stop/Audit 0
172 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
174 TestErrorJSONOutput 0.33
179 TestMainNoArgs 0.03
180 TestMinikubeProfile 65.84
236 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
240 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
241 TestNoKubernetes/serial/ProfileList 0.14
242 TestNoKubernetes/serial/Stop 0.06
244 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
262 TestStartStop/group/old-k8s-version/serial/Stop 0.06
263 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
267 TestStartStop/group/no-preload/serial/Stop 0.06
268 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
284 TestStartStop/group/embed-certs/serial/Stop 0.07
285 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
289 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
290 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
302 TestStartStop/group/newest-cni/serial/DeployApp 0
303 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
304 TestStartStop/group/newest-cni/serial/Stop 0.06
305 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
307 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-684000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-684000: exit status 85 (95.528125ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-684000 | jenkins | v1.31.2 | 12 Sep 23 14:42 PDT |          |
	|         | -p download-only-684000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 14:42:56
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 14:42:56.737560    1472 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:42:56.737699    1472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:42:56.737702    1472 out.go:309] Setting ErrFile to fd 2...
	I0912 14:42:56.737705    1472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:42:56.737812    1472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	W0912 14:42:56.737886    1472 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17194-1051/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17194-1051/.minikube/config/config.json: no such file or directory
	I0912 14:42:56.739019    1472 out.go:303] Setting JSON to true
	I0912 14:42:56.755468    1472 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":750,"bootTime":1694554226,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:42:56.755560    1472 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:42:56.760997    1472 out.go:97] [download-only-684000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:42:56.763919    1472 out.go:169] MINIKUBE_LOCATION=17194
	W0912 14:42:56.761152    1472 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 14:42:56.761213    1472 notify.go:220] Checking for updates...
	I0912 14:42:56.770986    1472 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:42:56.774011    1472 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:42:56.776978    1472 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:42:56.779959    1472 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	W0912 14:42:56.785959    1472 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 14:42:56.786173    1472 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:42:56.791066    1472 out.go:97] Using the qemu2 driver based on user configuration
	I0912 14:42:56.791085    1472 start.go:298] selected driver: qemu2
	I0912 14:42:56.791088    1472 start.go:902] validating driver "qemu2" against <nil>
	I0912 14:42:56.791138    1472 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 14:42:56.794917    1472 out.go:169] Automatically selected the socket_vmnet network
	I0912 14:42:56.800497    1472 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0912 14:42:56.800591    1472 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 14:42:56.800648    1472 cni.go:84] Creating CNI manager for ""
	I0912 14:42:56.800665    1472 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 14:42:56.800669    1472 start_flags.go:321] config:
	{Name:download-only-684000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-684000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:42:56.806077    1472 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:42:56.809997    1472 out.go:97] Downloading VM boot image ...
	I0912 14:42:56.810016    1472 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/iso/arm64/minikube-v1.31.0-1694468241-17194-arm64.iso
	I0912 14:43:11.357895    1472 out.go:97] Starting control plane node download-only-684000 in cluster download-only-684000
	I0912 14:43:11.357914    1472 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0912 14:43:11.470067    1472 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0912 14:43:11.470079    1472 cache.go:57] Caching tarball of preloaded images
	I0912 14:43:11.470313    1472 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0912 14:43:11.473020    1472 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0912 14:43:11.473030    1472 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:43:11.693035    1472 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0912 14:43:24.036976    1472 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:43:24.037096    1472 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:43:24.679356    1472 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0912 14:43:24.679555    1472 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/download-only-684000/config.json ...
	I0912 14:43:24.679572    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/download-only-684000/config.json: {Name:mk08a8eacb95eb27dd883eabd39b74e7ba802715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 14:43:24.679786    1472 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0912 14:43:24.680001    1472 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0912 14:43:25.030007    1472 out.go:169] 
	W0912 14:43:25.031919    1472 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17194-1051/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107aa7b20 0x107aa7b20 0x107aa7b20 0x107aa7b20 0x107aa7b20 0x107aa7b20 0x107aa7b20] Decompressors:map[bz2:0x140001c33f0 gz:0x140001c33f8 tar:0x140001c33a0 tar.bz2:0x140001c33b0 tar.gz:0x140001c33c0 tar.xz:0x140001c33d0 tar.zst:0x140001c33e0 tbz2:0x140001c33b0 tgz:0x140001c33c0 txz:0x140001c33d0 tzst:0x140001c33e0 xz:0x140001c3400 zip:0x140001c3410 zst:0x140001c3408] Getters:map[file:0x14000708030 http:0x14000c84aa0 https:0x14000c84af0] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0912 14:43:25.031946    1472 out_reason.go:110] 
	W0912 14:43:25.038983    1472 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 14:43:25.042951    1472 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-684000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
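
The kubectl cache failure in the log above reduces to a 404 on the checksum URL: minikube asks dl.k8s.io for a darwin/arm64 kubectl at v1.16.0, and per the `bad response code: 404` in the getter error, no such checksum file exists for that release line. A minimal sketch of the URL being probed (the `kubectl_url` helper is hypothetical, not minikube code):

```shell
# Hypothetical helper mirroring the download URL shown in the log.
# $1 = kubernetes version, $2 = os, $3 = arch
kubectl_url() {
  echo "https://dl.k8s.io/release/$1/bin/$2/$3/kubectl"
}

url="$(kubectl_url v1.16.0 darwin arm64)"
echo "$url"
# Probing the checksum file requires network access; from any host:
#   curl -sI "${url}.sha1" | head -n1
# reproduces the 404 that aborts the kubectl cache step.
```

The binary URL itself matches the `download.go:107` line in the log; only the trailing `.sha1` checksum fetch fails, which is why the download aborts before the kubectl binary is attempted.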

TestDownloadOnly/v1.28.1/json-events (16.28s)

=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-684000 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-684000 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=qemu2 : (16.281756208s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (16.28s)

TestDownloadOnly/v1.28.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.1/kubectl
--- PASS: TestDownloadOnly/v1.28.1/kubectl (0.00s)

TestDownloadOnly/v1.28.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-684000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-684000: exit status 85 (73.736666ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-684000 | jenkins | v1.31.2 | 12 Sep 23 14:42 PDT |          |
	|         | -p download-only-684000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-684000 | jenkins | v1.31.2 | 12 Sep 23 14:43 PDT |          |
	|         | -p download-only-684000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 14:43:25
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 14:43:25.233681    1489 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:43:25.233852    1489 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:43:25.233855    1489 out.go:309] Setting ErrFile to fd 2...
	I0912 14:43:25.233858    1489 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:43:25.233981    1489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	W0912 14:43:25.234048    1489 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17194-1051/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17194-1051/.minikube/config/config.json: no such file or directory
	I0912 14:43:25.234957    1489 out.go:303] Setting JSON to true
	I0912 14:43:25.250023    1489 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":779,"bootTime":1694554226,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:43:25.250114    1489 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:43:25.255231    1489 out.go:97] [download-only-684000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:43:25.259234    1489 out.go:169] MINIKUBE_LOCATION=17194
	I0912 14:43:25.255312    1489 notify.go:220] Checking for updates...
	I0912 14:43:25.265292    1489 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:43:25.268320    1489 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:43:25.271249    1489 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:43:25.274267    1489 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	W0912 14:43:25.280197    1489 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 14:43:25.280479    1489 config.go:182] Loaded profile config "download-only-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0912 14:43:25.280520    1489 start.go:810] api.Load failed for download-only-684000: filestore "download-only-684000": Docker machine "download-only-684000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0912 14:43:25.280591    1489 driver.go:373] Setting default libvirt URI to qemu:///system
	W0912 14:43:25.280607    1489 start.go:810] api.Load failed for download-only-684000: filestore "download-only-684000": Docker machine "download-only-684000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0912 14:43:25.284239    1489 out.go:97] Using the qemu2 driver based on existing profile
	I0912 14:43:25.284250    1489 start.go:298] selected driver: qemu2
	I0912 14:43:25.284252    1489 start.go:902] validating driver "qemu2" against &{Name:download-only-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-684000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:43:25.286155    1489 cni.go:84] Creating CNI manager for ""
	I0912 14:43:25.286173    1489 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 14:43:25.286180    1489 start_flags.go:321] config:
	{Name:download-only-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-684000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:43:25.290127    1489 iso.go:125] acquiring lock: {Name:mkad84685fe1f07588f3192605ed453618e0f4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 14:43:25.293297    1489 out.go:97] Starting control plane node download-only-684000 in cluster download-only-684000
	I0912 14:43:25.293305    1489 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 14:43:25.519238    1489 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 14:43:25.519291    1489 cache.go:57] Caching tarball of preloaded images
	I0912 14:43:25.520048    1489 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 14:43:25.525694    1489 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0912 14:43:25.525721    1489 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:43:25.737761    1489 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4?checksum=md5:014fa2c9750ed18a91c50dffb6ed7aeb -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0912 14:43:35.467867    1489 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:43:35.468018    1489 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0912 14:43:36.048993    1489 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0912 14:43:36.049053    1489 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/download-only-684000/config.json ...
	I0912 14:43:36.049300    1489 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0912 14:43:36.049458    1489 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17194-1051/.minikube/cache/darwin/arm64/v1.28.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-684000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.07s)
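
Comparing the two LogsDuration sections: the v1.16.0 run fetched `kubectl.sha1` (which 404s), while the v1.28.1 run above fetched `kubectl.sha256` and proceeded. A rough sketch of that version-dependent checksum choice; the v1.17 cutover in `checksum_ext` is an illustrative assumption, not taken from minikube's source:

```shell
# Illustrative only: pick the checksum-file extension by minor version,
# matching what the two log sections show (v1.16.0 -> .sha1, v1.28.1 -> .sha256).
checksum_ext() {
  local minor="${1#v1.}"   # strip "v1." prefix: v1.28.1 -> 28.1
  minor="${minor%%.*}"     # keep the minor number only: 28.1 -> 28
  if [ "$minor" -ge 17 ]; then
    echo sha256
  else
    echo sha1
  fi
}

checksum_ext v1.16.0   # -> sha1
checksum_ext v1.28.1   # -> sha256
```

Whatever the real cutover is, the practical effect visible here is that old release lines only expose legacy checksum files, and for darwin/arm64 those files simply do not exist at v1.16.0.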

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-684000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.37s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-594000 --alsologtostderr --binary-mirror http://127.0.0.1:49311 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-594000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-594000
--- PASS: TestBinaryMirror (0.37s)

TestHyperKitDriverInstallOrUpdate (7.87s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.87s)

TestErrorSpam/setup (29.96s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-790000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-790000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 --driver=qemu2 : (29.96464625s)
--- PASS: TestErrorSpam/setup (29.96s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 pause
--- PASS: TestErrorSpam/pause (0.65s)

TestErrorSpam/unpause (0.62s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 unpause
--- PASS: TestErrorSpam/unpause (0.62s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 stop: (12.078815917s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-790000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-790000 stop
--- PASS: TestErrorSpam/stop (12.24s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17194-1051/.minikube/files/etc/test/nested/copy/1470/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-737000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-737000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m23.515642833s)
--- PASS: TestFunctional/serial/StartWithProxy (83.52s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-737000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-737000 --alsologtostderr -v=8: (35.178582916s)
functional_test.go:659: soft start took 35.17896225s for "functional-737000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.18s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-737000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-737000 cache add registry.k8s.io/pause:3.1: (1.233784541s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-737000 cache add registry.k8s.io/pause:3.3: (1.204497583s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-737000 cache add registry.k8s.io/pause:latest: (1.092782458s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2369189045/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cache add minikube-local-cache-test:functional-737000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cache delete minikube-local-cache-test:functional-737000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-737000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (71.705834ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.93s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 kubectl -- --context functional-737000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.40s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-737000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.53s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-737000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-737000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.18104775s)
functional_test.go:757: restart took 33.1811585s for "functional-737000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.18s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-737000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1599540085/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.58s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-737000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-737000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-737000: exit status 115 (109.5165ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31994 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-737000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.63s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 config get cpus: exit status 14 (29.851875ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 config get cpus: exit status 14 (28.869417ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-737000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-737000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2181: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.96s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-737000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-737000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.913ms)

-- stdout --
	* [functional-737000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0912 14:48:50.543510    2164 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:48:50.543639    2164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:48:50.543641    2164 out.go:309] Setting ErrFile to fd 2...
	I0912 14:48:50.543644    2164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:48:50.543787    2164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:48:50.544844    2164 out.go:303] Setting JSON to false
	I0912 14:48:50.561793    2164 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1104,"bootTime":1694554226,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:48:50.561857    2164 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:48:50.566203    2164 out.go:177] * [functional-737000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0912 14:48:50.574209    2164 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 14:48:50.574295    2164 notify.go:220] Checking for updates...
	I0912 14:48:50.577251    2164 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:48:50.581249    2164 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:48:50.584159    2164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:48:50.587187    2164 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 14:48:50.590258    2164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:48:50.593501    2164 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:48:50.593787    2164 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:48:50.598249    2164 out.go:177] * Using the qemu2 driver based on existing profile
	I0912 14:48:50.605127    2164 start.go:298] selected driver: qemu2
	I0912 14:48:50.605131    2164 start.go:902] validating driver "qemu2" against &{Name:functional-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:functional-737000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:48:50.605181    2164 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:48:50.611201    2164 out.go:177] 
	W0912 14:48:50.615041    2164 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0912 14:48:50.619185    2164 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-737000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.21s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-737000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-737000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (102.8435ms)

-- stdout --
	* [functional-737000] minikube v1.31.2 sur Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0912 14:48:50.747697    2175 out.go:296] Setting OutFile to fd 1 ...
	I0912 14:48:50.747801    2175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:48:50.747804    2175 out.go:309] Setting ErrFile to fd 2...
	I0912 14:48:50.747806    2175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 14:48:50.747923    2175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
	I0912 14:48:50.749318    2175 out.go:303] Setting JSON to false
	I0912 14:48:50.765366    2175 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1104,"bootTime":1694554226,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0912 14:48:50.765485    2175 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0912 14:48:50.769267    2175 out.go:177] * [functional-737000] minikube v1.31.2 sur Darwin 13.5.2 (arm64)
	I0912 14:48:50.774208    2175 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 14:48:50.778201    2175 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	I0912 14:48:50.774243    2175 notify.go:220] Checking for updates...
	I0912 14:48:50.779293    2175 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0912 14:48:50.782173    2175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 14:48:50.785223    2175 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	I0912 14:48:50.788213    2175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 14:48:50.791514    2175 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0912 14:48:50.791801    2175 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 14:48:50.796227    2175 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0912 14:48:50.805225    2175 start.go:298] selected driver: qemu2
	I0912 14:48:50.805234    2175 start.go:902] validating driver "qemu2" against &{Name:functional-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-737000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 14:48:50.805296    2175 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 14:48:50.812215    2175 out.go:177] 
	W0912 14:48:50.816187    2175 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	(English: Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB — the French output is the expected result of the InternationalLanguage test below)
	I0912 14:48:50.819194    2175 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.10s)

TestFunctional/parallel/StatusCmd (0.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)
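The `-f` flag exercised above renders minikube's status through a Go text/template. The mechanics can be sketched in isolation — note the `Status` struct below is an illustrative stand-in, not minikube's actual type, while the template string (including the `kublet` label) is copied verbatim from the command in the log:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is a hypothetical stand-in for the struct `minikube status -f`
// templates over; only the four fields used by the test's format string.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// render applies the exact format string from the test run above.
func render(s Status) string {
	t := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"))
	var buf bytes.Buffer
	t.Execute(&buf, s)
	return buf.String()
}

func main() {
	fmt.Println(render(Status{"Running", "Running", "Running", "Configured"}))
	// prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}
```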

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (23.99s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4f73c804-c9d3-44a2-b8f6-31c3ae956271] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007109041s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-737000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-737000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-737000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-737000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1efcd0cc-725f-41e2-b9b5-e761ba50022b] Pending
helpers_test.go:344: "sp-pod" [1efcd0cc-725f-41e2-b9b5-e761ba50022b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1efcd0cc-725f-41e2-b9b5-e761ba50022b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.009560167s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-737000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-737000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-737000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e60c1e15-ea6e-44a4-a4ad-f4cef9524b30] Pending
helpers_test.go:344: "sp-pod" [e60c1e15-ea6e-44a4-a4ad-f4cef9524b30] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e60c1e15-ea6e-44a4-a4ad-f4cef9524b30] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008083791s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-737000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.99s)
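The `pvc.yaml` applied above creates the `myclaim` PersistentVolumeClaim that `sp-pod` then mounts at `/tmp/mount`; in outline it is a standard manifest along these lines (the claim name comes from the `get pvc myclaim` step in the log, but the access mode and storage size are assumptions — the log does not show the file's contents):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim        # name taken from the `get pvc myclaim` step above
spec:
  accessModes:
    - ReadWriteOnce    # assumed; typical for a single-node test claim
  resources:
    requests:
      storage: 500Mi   # assumed size
```

The passing test then verifies durability: a file touched in the first `sp-pod` survives pod deletion and is still visible from the re-created pod.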

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh -n functional-737000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cp functional-737000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd161190003/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh -n functional-737000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.29s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1470/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo cat /etc/test/nested/copy/1470/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.42s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1470.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo cat /etc/ssl/certs/1470.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1470.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo cat /usr/share/ca-certificates/1470.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14702.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo cat /etc/ssl/certs/14702.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14702.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo cat /usr/share/ca-certificates/14702.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.42s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-737000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "sudo systemctl is-active crio": exit status 1 (65.930417ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
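The test above passes because `systemctl is-active crio` reports `inactive` via a nonzero exit status (status 3 inside the VM, surfaced to the harness as exit status 1 through ssh). Inspecting a command's exit code from Go can be sketched as follows — the `exitCode` helper and the `false` stand-in command are illustrative, not minikube code:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and returns its exit status, mirroring how the
// test treats `systemctl is-active`: nonzero means the runtime is not active.
func exitCode(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode() // command ran but exited nonzero
	}
	if err != nil {
		return -1 // command could not be started at all
	}
	return 0
}

func main() {
	fmt.Println(exitCode("false")) // `false` exits 1, like an inactive unit; prints 1
}
```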

TestFunctional/parallel/License (0.62s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.62s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.17s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-737000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-737000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-737000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-737000 image ls --format short --alsologtostderr:
I0912 14:48:52.955458    2203 out.go:296] Setting OutFile to fd 1 ...
I0912 14:48:52.955656    2203 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 14:48:52.955660    2203 out.go:309] Setting ErrFile to fd 2...
I0912 14:48:52.955662    2203 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 14:48:52.955825    2203 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
I0912 14:48:52.956298    2203 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 14:48:52.956363    2203 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 14:48:52.957213    2203 ssh_runner.go:195] Run: systemctl --version
I0912 14:48:52.957223    2203 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/functional-737000/id_rsa Username:docker}
I0912 14:48:52.988779    2203 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-737000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | 71a676dd070f4 | 1.41MB |
| gcr.io/google-containers/addon-resizer      | functional-737000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | latest            | 91582cfffc2d0 | 192MB  |
| registry.k8s.io/kube-proxy                  | v1.28.1           | 812f5241df7fd | 68.3MB |
| registry.k8s.io/kube-scheduler              | v1.28.1           | b4a5a57e99492 | 57.8MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/minikube-local-cache-test | functional-737000 | 207f9c516c42b | 30B    |
| docker.io/library/nginx                     | alpine            | fa0c6bb795403 | 43.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/localhost/my-image                | functional-737000 | 0444ad4160179 | 1.41MB |
| registry.k8s.io/kube-controller-manager     | v1.28.1           | 8b6e1980b7584 | 116MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.28.1           | b29fb62480892 | 119MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-737000 image ls --format table --alsologtostderr:
I0912 14:48:55.313546    2215 out.go:296] Setting OutFile to fd 1 ...
I0912 14:48:55.313710    2215 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 14:48:55.313713    2215 out.go:309] Setting ErrFile to fd 2...
I0912 14:48:55.313716    2215 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 14:48:55.313841    2215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
I0912 14:48:55.314295    2215 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 14:48:55.314358    2215 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 14:48:55.315182    2215 ssh_runner.go:195] Run: systemctl --version
I0912 14:48:55.315194    2215 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/functional-737000/id_rsa Username:docker}
I0912 14:48:55.345993    2215 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2023/09/12 14:48:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-737000 image ls --format json --alsologtostderr:
[{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1410000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-737000"],"size":"32900000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"0444ad4160179d5da1ab96042b78409fe6fd6e7c4b4620374ba4dfb5f7af9a57","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-737000"],"size":"1410000"},{"id":"fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"119000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"57800000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"207f9c516c42ba50dac51a306129e5a54bc934c72b370fb126045f1771d45f3b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-737000"],"size":"30"},{"id":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"116000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"68300000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-737000 image ls --format json --alsologtostderr:
I0912 14:48:55.237107    2213 out.go:296] Setting OutFile to fd 1 ...
I0912 14:48:55.237267    2213 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 14:48:55.237270    2213 out.go:309] Setting ErrFile to fd 2...
I0912 14:48:55.237272    2213 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 14:48:55.237398    2213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
I0912 14:48:55.237848    2213 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 14:48:55.237912    2213 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 14:48:55.238817    2213 ssh_runner.go:195] Run: systemctl --version
I0912 14:48:55.238827    2213 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/functional-737000/id_rsa Username:docker}
I0912 14:48:55.269711    2213 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-737000 image ls --format yaml --alsologtostderr:
- id: 207f9c516c42ba50dac51a306129e5a54bc934c72b370fb126045f1771d45f3b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-737000
size: "30"
- id: fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"
- id: b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "57800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-737000
size: "32900000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "116000000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "68300000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "119000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-737000 image ls --format yaml --alsologtostderr:
I0912 14:48:53.033808    2205 out.go:296] Setting OutFile to fd 1 ...
I0912 14:48:53.033986    2205 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 14:48:53.033989    2205 out.go:309] Setting ErrFile to fd 2...
I0912 14:48:53.033991    2205 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 14:48:53.034112    2205 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
I0912 14:48:53.034563    2205 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 14:48:53.034627    2205 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 14:48:53.035528    2205 ssh_runner.go:195] Run: systemctl --version
I0912 14:48:53.035541    2205 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/functional-737000/id_rsa Username:docker}
I0912 14:48:53.066136    2205 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh pgrep buildkitd: exit status 1 (63.185292ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image build -t localhost/my-image:functional-737000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-737000 image build -t localhost/my-image:functional-737000 testdata/build --alsologtostderr: (1.984600541s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-737000 image build -t localhost/my-image:functional-737000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 759712537c75
Removing intermediate container 759712537c75
---> ac951aefa4e2
Step 3/3 : ADD content.txt /
---> 0444ad416017
Successfully built 0444ad416017
Successfully tagged localhost/my-image:functional-737000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-737000 image build -t localhost/my-image:functional-737000 testdata/build --alsologtostderr:
I0912 14:48:53.174026    2209 out.go:296] Setting OutFile to fd 1 ...
I0912 14:48:53.174242    2209 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 14:48:53.174247    2209 out.go:309] Setting ErrFile to fd 2...
I0912 14:48:53.174249    2209 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 14:48:53.174377    2209 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17194-1051/.minikube/bin
I0912 14:48:53.174829    2209 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 14:48:53.175254    2209 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0912 14:48:53.176111    2209 ssh_runner.go:195] Run: systemctl --version
I0912 14:48:53.176121    2209 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17194-1051/.minikube/machines/functional-737000/id_rsa Username:docker}
I0912 14:48:53.209380    2209 build_images.go:151] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.70507441.tar
I0912 14:48:53.209454    2209 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0912 14:48:53.212365    2209 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.70507441.tar
I0912 14:48:53.213744    2209 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.70507441.tar: stat -c "%s %y" /var/lib/minikube/build/build.70507441.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.70507441.tar': No such file or directory
I0912 14:48:53.213758    2209 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.70507441.tar --> /var/lib/minikube/build/build.70507441.tar (3072 bytes)
I0912 14:48:53.223093    2209 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.70507441
I0912 14:48:53.225877    2209 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.70507441 -xf /var/lib/minikube/build/build.70507441.tar
I0912 14:48:53.228427    2209 docker.go:339] Building image: /var/lib/minikube/build/build.70507441
I0912 14:48:53.228468    2209 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-737000 /var/lib/minikube/build/build.70507441
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0912 14:48:55.119229    2209 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-737000 /var/lib/minikube/build/build.70507441: (1.890786666s)
I0912 14:48:55.119286    2209 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.70507441
I0912 14:48:55.122535    2209 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.70507441.tar
I0912 14:48:55.125176    2209 build_images.go:207] Built localhost/my-image:functional-737000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.70507441.tar
I0912 14:48:55.125193    2209 build_images.go:123] succeeded building to: functional-737000
I0912 14:48:55.125196    2209 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.13s)
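For reference, the three build steps recorded above imply a Dockerfile of roughly this shape (a sketch reconstructed from the Step 1/3 through 3/3 output; the actual contents of testdata/build, including content.txt, may differ):

```dockerfile
# Step 1/3: pull the busybox base image used by the test
FROM gcr.io/k8s-minikube/busybox
# Step 2/3: a no-op layer, exercising the RUN instruction
RUN true
# Step 3/3: copy a file from the build context into the image root
ADD content.txt /
```

The image is tagged localhost/my-image:functional-737000 by the `minikube image build -t` invocation, not by the Dockerfile itself.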

TestFunctional/parallel/ImageCommands/Setup (2.18s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.122117625s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-737000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.18s)

TestFunctional/parallel/DockerEnv/bash (0.41s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-737000 docker-env) && out/minikube-darwin-arm64 status -p functional-737000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-737000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.41s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-737000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-737000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-kg9rn" [e7dada1b-a3db-48b5-b371-7f0d30ea3ffe] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-kg9rn" [e7dada1b-a3db-48b5-b371-7f0d30ea3ffe] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.013241958s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.10s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image load --daemon gcr.io/google-containers/addon-resizer:functional-737000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-737000 image load --daemon gcr.io/google-containers/addon-resizer:functional-737000 --alsologtostderr: (2.050063958s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.13s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image load --daemon gcr.io/google-containers/addon-resizer:functional-737000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-737000 image load --daemon gcr.io/google-containers/addon-resizer:functional-737000 --alsologtostderr: (1.455313291s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.53s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.032738125s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-737000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image load --daemon gcr.io/google-containers/addon-resizer:functional-737000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-737000 image load --daemon gcr.io/google-containers/addon-resizer:functional-737000 --alsologtostderr: (1.851925s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.01s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image save gcr.io/google-containers/addon-resizer:functional-737000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image rm gcr.io/google-containers/addon-resizer:functional-737000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-737000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image save --daemon gcr.io/google-containers/addon-resizer:functional-737000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-737000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-737000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4e7bbf91-b334-47c5-873c-b1920dbcca02] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4e7bbf91-b334-47c5-873c-b1920dbcca02] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.010888625s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.13s)

TestFunctional/parallel/ServiceCmd/List (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.10s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 service list -o json
functional_test.go:1493: Took "92.165167ms" to run "out/minikube-darwin-arm64 -p functional-737000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:30282
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:30282
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-737000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.63.18 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "116.305292ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "33.826083ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "115.179291ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "33.666083ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

TestFunctional/parallel/MountCmd/specific-port (0.92s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2129355538/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.864584ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2129355538/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "sudo umount -f /mount-9p": exit status 1 (66.301625ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-737000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2129355538/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.92s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.6s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2149572816/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2149572816/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2149572816/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T" /mount1: exit status 1 (70.736375ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-737000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2149572816/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2149572816/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2149572816/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.60s)
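In the VerifyCleanup log above, the first `findmnt -T /mount1` probe exits with status 1 while the 9p mount is still settling, and the harness simply re-runs it until it succeeds. A minimal sketch of that retry-until-success pattern (the `wait_for_mount` helper and the commented probe wiring are hypothetical, not part of the minikube test code):

```python
import time

def wait_for_mount(probe, attempts=5, delay=0.2):
    """Re-run a probe (a callable returning an exit status) until it reports 0.

    Mirrors how the harness re-runs `findmnt -T /mount1` after the first
    attempt exits non-zero while the mount is still settling.
    """
    for _ in range(attempts):
        if probe() == 0:
            return True
        time.sleep(delay)
    return False

# Hypothetical probe wiring, using the binary and profile from this log:
#   probe = lambda: subprocess.call(
#       ["out/minikube-darwin-arm64", "-p", "functional-737000",
#        "ssh", "findmnt -T /mount1"])
```

With a probe that fails twice before succeeding, `wait_for_mount` returns `True` on the third attempt; with a probe that always fails it exhausts `attempts` and returns `False`.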

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.12s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-737000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

                                                
                                    
TestFunctional/delete_my-image_image (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-737000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-737000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                    
TestImageBuild/serial/Setup (28.63s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-477000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-477000 --driver=qemu2 : (28.626808s)
--- PASS: TestImageBuild/serial/Setup (28.63s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.61s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-477000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-477000: (1.608675291s)
--- PASS: TestImageBuild/serial/NormalBuild (1.61s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.12s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-477000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.12s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-477000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (79.3s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-627000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-627000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m19.294938833s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (79.30s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (19.35s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-627000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-627000 addons enable ingress --alsologtostderr -v=5: (19.351947083s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (19.35s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.24s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-627000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.24s)

                                                
                                    
TestJSONOutput/start/Command (44.74s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-484000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-484000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (44.743560958s)
--- PASS: TestJSONOutput/start/Command (44.74s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.26s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-484000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.26s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.23s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-484000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.23s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (12.07s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-484000 --output=json --user=testUser
E0912 14:52:56.758103    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
E0912 14:52:56.765528    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
E0912 14:52:56.777625    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
E0912 14:52:56.799746    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
E0912 14:52:56.841861    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
E0912 14:52:56.923916    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
E0912 14:52:57.086029    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
E0912 14:52:57.408170    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
E0912 14:52:58.050257    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
E0912 14:52:59.332182    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
E0912 14:53:01.893416    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-484000 --output=json --user=testUser: (12.073575042s)
--- PASS: TestJSONOutput/stop/Command (12.07s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.33s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-554000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-554000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.890542ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ad38d7cb-5a47-4b72-b4d0-a84047691ed0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-554000] minikube v1.31.2 on Darwin 13.5.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e28fe819-07b3-45b0-ac5a-cc5648d1f4cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17194"}}
	{"specversion":"1.0","id":"6555a696-30b2-4e49-8987-3dba6805a157","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig"}}
	{"specversion":"1.0","id":"6b8b058e-60bf-462c-851b-030a38fe9cee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d425a176-c324-463c-8c9d-ce5a4295f745","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"94b3ef83-64a6-4dc5-9d20-858c8f479f70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube"}}
	{"specversion":"1.0","id":"4d11f4d5-3426-4d43-b8d0-f97cd0dfd358","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"33dca71c-8bb9-4624-acb4-42482f7bd0c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-554000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-554000
--- PASS: TestErrorJSONOutput (0.33s)
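Each line that minikube emits with `--output=json` is a CloudEvents-style JSON object, as the TestErrorJSONOutput stdout above shows. A minimal sketch of consuming one such event (the `classify` helper is hypothetical; the sample line is copied verbatim from this log):

```python
import json

# The final error event from the TestErrorJSONOutput stdout above.
line = ('{"specversion":"1.0","id":"33dca71c-8bb9-4624-acb4-42482f7bd0c9",'
        '"source":"https://minikube.sigs.k8s.io/",'
        '"type":"io.k8s.sigs.minikube.error",'
        '"datacontenttype":"application/json",'
        '"data":{"advice":"","exitcode":"56","issues":"",'
        '"message":"The driver \'fail\' is not supported on darwin/arm64",'
        '"name":"DRV_UNSUPPORTED_OS","url":""}}')

def classify(raw):
    """Split one minikube JSON event into its short type and payload.

    The "type" field is dotted (io.k8s.sigs.minikube.step / .info / .error);
    the last segment is enough to route the event.
    """
    event = json.loads(raw)
    kind = event["type"].rsplit(".", 1)[-1]
    return kind, event["data"]

kind, data = classify(line)  # kind == "error", data carries exitcode/name/message
```

Routing on the last `type` segment is what lets a consumer treat `step` events as progress, `info` as environment notes, and `error` as the terminal exit-code carrier, matching the event mix in the stdout above.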

                                                
                                    
TestMainNoArgs (0.03s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

                                                
                                    
TestMinikubeProfile (65.84s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-236000 --driver=qemu2 
E0912 14:53:07.015567    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
E0912 14:53:17.255872    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-236000 --driver=qemu2 : (29.277023792s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-238000 --driver=qemu2 
E0912 14:53:37.735981    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17194-1051/.minikube/profiles/functional-737000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-238000 --driver=qemu2 : (35.764814542s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-236000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-238000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-238000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-238000
helpers_test.go:175: Cleaning up "first-236000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-236000
--- PASS: TestMinikubeProfile (65.84s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-647000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-647000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (96.69175ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-647000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17194
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17194-1051/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17194-1051/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-647000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-647000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (43.32375ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-647000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)

                                                
                                    
TestNoKubernetes/serial/Stop (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-647000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-647000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-647000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (41.385209ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-647000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-128000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-128000 -n old-k8s-version-128000: exit status 7 (28.349291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-128000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-981000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (29.314042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-981000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-280000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-280000 -n embed-certs-280000: exit status 7 (28.398666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-280000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-803000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-803000 -n default-k8s-diff-port-803000: exit status 7 (29.7645ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-803000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-091000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-091000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-091000 -n newest-cni-091000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-091000 -n newest-cni-091000: exit status 7 (30.103041ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-091000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/244)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

TestDownloadOnly/v1.28.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (13.96s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port28623990/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694555315030373000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port28623990/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694555315030373000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port28623990/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694555315030373000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port28623990/001/test-1694555315030373000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (66.079541ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.945917ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (65.51925ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.540833ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.482917ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.788125ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.361708ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.370542ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "sudo umount -f /mount-9p": exit status 1 (62.619167ms)
-- stdout --
	umount: /mount-9p: no mount point specified.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-737000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port28623990/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (13.96s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-786000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-786000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-786000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-786000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-786000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-786000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-786000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-786000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-786000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-786000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-786000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-786000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-786000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-786000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-786000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-786000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-786000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-786000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-786000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-786000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-786000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-786000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-786000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-786000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-786000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-786000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-786000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-786000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-786000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-786000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-786000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: kubelet daemon config:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> k8s: kubelet logs:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-786000

>>> host: docker daemon status:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: docker daemon config:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: docker system info:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: cri-docker daemon status:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: cri-docker daemon config:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: cri-dockerd version:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: containerd daemon status:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: containerd daemon config:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: containerd config dump:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: crio daemon status:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: crio daemon config:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: /etc/crio:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

>>> host: crio config:
* Profile "cilium-786000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-786000"

----------------------- debugLogs end: cilium-786000 [took: 2.313293292s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-786000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-786000
--- SKIP: TestNetworkPlugins/group/cilium (2.55s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-259000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-259000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
