Test Report: QEMU_macOS 17488

                    
                      292152b7ba2fff47063f7712cda18987a57d80fb:2023-10-25:31605
                    
                

Test fail (87/259)

Order failed test Duration
3 TestDownloadOnly/v1.16.0/json-events 14.45
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.94
28 TestAddons/parallel/Ingress 33.6
42 TestCertOptions 10.08
43 TestCertExpiration 195.24
44 TestDockerFlags 10.15
45 TestForceSystemdFlag 11.83
46 TestForceSystemdEnv 10.12
52 TestErrorSpam/setup 18.65
91 TestFunctional/parallel/ServiceCmdConnect 30.93
158 TestImageBuild/serial/BuildWithBuildArg 1.1
167 TestIngressAddonLegacy/serial/ValidateIngressAddons 50.91
202 TestMountStart/serial/StartWithMountFirst 10.02
205 TestMultiNode/serial/FreshStart2Nodes 10.09
206 TestMultiNode/serial/DeployApp2Nodes 91.72
207 TestMultiNode/serial/PingHostFrom2Pods 0.09
208 TestMultiNode/serial/AddNode 0.08
209 TestMultiNode/serial/ProfileList 0.11
210 TestMultiNode/serial/CopyFile 0.06
211 TestMultiNode/serial/StopNode 0.15
212 TestMultiNode/serial/StartAfterStop 0.11
213 TestMultiNode/serial/RestartKeepsNodes 5.38
214 TestMultiNode/serial/DeleteNode 0.11
215 TestMultiNode/serial/StopMultiNode 0.16
216 TestMultiNode/serial/RestartMultiNode 5.25
217 TestMultiNode/serial/ValidateNameConflict 20.11
221 TestPreload 9.97
223 TestScheduledStopUnix 9.9
224 TestSkaffold 12.07
227 TestRunningBinaryUpgrade 157.21
229 TestKubernetesUpgrade 15.34
242 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.5
243 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.4
244 TestStoppedBinaryUpgrade/Setup 157.28
246 TestPause/serial/Start 9.89
256 TestNoKubernetes/serial/StartWithK8s 9.89
257 TestNoKubernetes/serial/StartWithStopK8s 5.32
258 TestNoKubernetes/serial/Start 5.32
262 TestNoKubernetes/serial/StartNoArgs 5.34
264 TestNetworkPlugins/group/auto/Start 9.76
265 TestNetworkPlugins/group/flannel/Start 9.76
266 TestNetworkPlugins/group/kindnet/Start 9.85
267 TestNetworkPlugins/group/enable-default-cni/Start 9.72
268 TestNetworkPlugins/group/bridge/Start 9.8
269 TestNetworkPlugins/group/kubenet/Start 9.86
270 TestNetworkPlugins/group/custom-flannel/Start 9.77
271 TestNetworkPlugins/group/calico/Start 9.96
272 TestNetworkPlugins/group/false/Start 9.87
274 TestStartStop/group/old-k8s-version/serial/FirstStart 12.14
275 TestStoppedBinaryUpgrade/Upgrade 2.73
276 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
278 TestStartStop/group/no-preload/serial/FirstStart 9.9
279 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
280 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
283 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
284 TestStartStop/group/no-preload/serial/DeployApp 0.09
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.29
289 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
290 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
292 TestStartStop/group/old-k8s-version/serial/Pause 0.11
294 TestStartStop/group/embed-certs/serial/FirstStart 10.16
295 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
296 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
297 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
298 TestStartStop/group/no-preload/serial/Pause 0.11
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.97
301 TestStartStop/group/embed-certs/serial/DeployApp 0.09
302 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
305 TestStartStop/group/embed-certs/serial/SecondStart 5.22
306 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.28
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
312 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
314 TestStartStop/group/embed-certs/serial/Pause 0.1
316 TestStartStop/group/newest-cni/serial/FirstStart 9.83
317 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
318 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
319 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
320 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
325 TestStartStop/group/newest-cni/serial/SecondStart 5.26
328 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
329 TestStartStop/group/newest-cni/serial/Pause 0.11
x
+
TestDownloadOnly/v1.16.0/json-events (14.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-774000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-774000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.447107s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"97bd9daa-3c47-4136-91d2-7d2b66541e38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-774000] minikube v1.31.2 on Darwin 14.0 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d5479907-95cf-4123-bff9-d5e07948f1be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17488"}}
	{"specversion":"1.0","id":"395f3137-9c7b-45a8-b05c-4ac4e346b0cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig"}}
	{"specversion":"1.0","id":"a589d43e-c196-41ba-99a4-75ab78ddc82e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3f7f387a-8b8b-4dcf-833b-e64038cc9c19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3457bed8-eb1f-4b5d-b9fd-392367e702a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube"}}
	{"specversion":"1.0","id":"cc0840ab-4e3a-449e-bf81-dc5516a95493","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"c1af2961-0ccc-4d28-be61-5bb3507302a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e74ea9c-e3f7-4f06-856a-e623b1b06e64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"b5d259d1-dc94-4e5a-a89c-b31682123fdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc26a958-a3fc-4ea2-8aba-b9eee55ed5ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-774000 in cluster download-only-774000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"26230cb5-72b3-4d5e-b3c5-c68fb829e3b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"65264173-8b86-4553-b285-f5a682d583ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1046145a0 0x1046145a0 0x1046145a0 0x1046145a0 0x1046145a0 0x1046145a0 0x1046145a0] Decompressors:map[bz2:0x14000680000 gz:0x14000680008 tar:0x1400000ffb0 tar.bz2:0x1400000ffc0 tar.gz:0x1400000ffd0 tar.xz:0x1400000ffe0 tar.zst:0x1400000fff0 tbz2:0x1400000ffc0 tgz:0x140000
0ffd0 txz:0x1400000ffe0 tzst:0x1400000fff0 xz:0x14000680010 zip:0x14000680020 zst:0x14000680018] Getters:map[file:0x140004a5650 http:0x1400051e140 https:0x1400051e190] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"f104c015-7bd3-4085-a3a6-bab61f283f5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:10:09.184746    1725 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:10:09.184944    1725 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:10:09.184947    1725 out.go:309] Setting ErrFile to fd 2...
	I1025 14:10:09.184949    1725 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:10:09.185073    1725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	W1025 14:10:09.185167    1725 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17488-1304/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17488-1304/.minikube/config/config.json: no such file or directory
	I1025 14:10:09.186290    1725 out.go:303] Setting JSON to true
	I1025 14:10:09.204424    1725 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":583,"bootTime":1698267626,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:10:09.204503    1725 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:10:09.213372    1725 out.go:97] [download-only-774000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:10:09.217367    1725 out.go:169] MINIKUBE_LOCATION=17488
	W1025 14:10:09.213498    1725 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 14:10:09.213524    1725 notify.go:220] Checking for updates...
	I1025 14:10:09.227373    1725 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:10:09.235313    1725 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:10:09.242382    1725 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:10:09.249386    1725 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	W1025 14:10:09.257275    1725 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 14:10:09.257481    1725 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:10:09.262374    1725 out.go:97] Using the qemu2 driver based on user configuration
	I1025 14:10:09.262381    1725 start.go:298] selected driver: qemu2
	I1025 14:10:09.262395    1725 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:10:09.262453    1725 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:10:09.266287    1725 out.go:169] Automatically selected the socket_vmnet network
	I1025 14:10:09.273752    1725 start_flags.go:386] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1025 14:10:09.273838    1725 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 14:10:09.273937    1725 cni.go:84] Creating CNI manager for ""
	I1025 14:10:09.273957    1725 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 14:10:09.273966    1725 start_flags.go:323] config:
	{Name:download-only-774000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-774000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:10:09.280796    1725 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:10:09.285381    1725 out.go:97] Downloading VM boot image ...
	I1025 14:10:09.285396    1725 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso
	I1025 14:10:14.171529    1725 out.go:97] Starting control plane node download-only-774000 in cluster download-only-774000
	I1025 14:10:14.171567    1725 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 14:10:14.228275    1725 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1025 14:10:14.228290    1725 cache.go:56] Caching tarball of preloaded images
	I1025 14:10:14.228471    1725 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 14:10:14.233081    1725 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1025 14:10:14.233088    1725 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 14:10:14.309723    1725 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1025 14:10:22.168337    1725 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 14:10:22.168463    1725 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 14:10:22.810927    1725 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 14:10:22.811130    1725 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/download-only-774000/config.json ...
	I1025 14:10:22.811147    1725 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/download-only-774000/config.json: {Name:mkb88c3470620066988bab56fb499300b62e0198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:10:22.811357    1725 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 14:10:22.811518    1725 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I1025 14:10:23.555598    1725 out.go:169] 
	W1025 14:10:23.560697    1725 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1046145a0 0x1046145a0 0x1046145a0 0x1046145a0 0x1046145a0 0x1046145a0 0x1046145a0] Decompressors:map[bz2:0x14000680000 gz:0x14000680008 tar:0x1400000ffb0 tar.bz2:0x1400000ffc0 tar.gz:0x1400000ffd0 tar.xz:0x1400000ffe0 tar.zst:0x1400000fff0 tbz2:0x1400000ffc0 tgz:0x1400000ffd0 txz:0x1400000ffe0 tzst:0x1400000fff0 xz:0x14000680010 zip:0x14000680020 zst:0x14000680018] Getters:map[file:0x140004a5650 http:0x1400051e140 https:0x1400051e190] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1025 14:10:23.560726    1725 out_reason.go:110] 
	W1025 14:10:23.567550    1725 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:10:23.571584    1725 out.go:169] 

                                                
                                                
** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-774000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (14.45s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:163: expected the file for binary exist at "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestOffline (9.94s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-032000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-032000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.788822167s)

                                                
                                                
-- stdout --
	* [offline-docker-032000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-032000 in cluster offline-docker-032000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-032000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:28:00.539403    3620 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:28:00.539556    3620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:28:00.539559    3620 out.go:309] Setting ErrFile to fd 2...
	I1025 14:28:00.539562    3620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:28:00.539690    3620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:28:00.540759    3620 out.go:303] Setting JSON to false
	I1025 14:28:00.558359    3620 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1654,"bootTime":1698267626,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:28:00.558457    3620 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:28:00.563947    3620 out.go:177] * [offline-docker-032000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:28:00.571807    3620 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:28:00.571845    3620 notify.go:220] Checking for updates...
	I1025 14:28:00.577793    3620 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:28:00.580806    3620 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:28:00.583744    3620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:28:00.586798    3620 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:28:00.589699    3620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:28:00.593132    3620 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:28:00.593192    3620 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:28:00.596830    3620 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:28:00.603813    3620 start.go:298] selected driver: qemu2
	I1025 14:28:00.603822    3620 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:28:00.603829    3620 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:28:00.605831    3620 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:28:00.608780    3620 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:28:00.610108    3620 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:28:00.610130    3620 cni.go:84] Creating CNI manager for ""
	I1025 14:28:00.610136    3620 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:28:00.610141    3620 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 14:28:00.610146    3620 start_flags.go:323] config:
	{Name:offline-docker-032000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:offline-docker-032000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:28:00.614647    3620 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:28:00.621816    3620 out.go:177] * Starting control plane node offline-docker-032000 in cluster offline-docker-032000
	I1025 14:28:00.625732    3620 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:28:00.625760    3620 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:28:00.625770    3620 cache.go:56] Caching tarball of preloaded images
	I1025 14:28:00.625859    3620 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:28:00.625864    3620 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:28:00.625933    3620 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/offline-docker-032000/config.json ...
	I1025 14:28:00.625943    3620 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/offline-docker-032000/config.json: {Name:mkd0193c97019d614d351d97d554655390ae73ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:28:00.626197    3620 start.go:365] acquiring machines lock for offline-docker-032000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:28:00.626225    3620 start.go:369] acquired machines lock for "offline-docker-032000" in 21.917µs
	I1025 14:28:00.626235    3620 start.go:93] Provisioning new machine with config: &{Name:offline-docker-032000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.3 ClusterName:offline-docker-032000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:28:00.626267    3620 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:28:00.630798    3620 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 14:28:00.645807    3620 start.go:159] libmachine.API.Create for "offline-docker-032000" (driver="qemu2")
	I1025 14:28:00.645829    3620 client.go:168] LocalClient.Create starting
	I1025 14:28:00.645915    3620 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:28:00.645943    3620 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:00.645954    3620 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:00.645989    3620 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:28:00.646006    3620 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:00.646014    3620 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:00.646354    3620 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:28:00.768928    3620 main.go:141] libmachine: Creating SSH key...
	I1025 14:28:00.834330    3620 main.go:141] libmachine: Creating Disk image...
	I1025 14:28:00.834341    3620 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:28:00.834530    3620 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/disk.qcow2
	I1025 14:28:00.859564    3620 main.go:141] libmachine: STDOUT: 
	I1025 14:28:00.859583    3620 main.go:141] libmachine: STDERR: 
	I1025 14:28:00.859640    3620 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/disk.qcow2 +20000M
	I1025 14:28:00.871240    3620 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:28:00.871256    3620 main.go:141] libmachine: STDERR: 
	I1025 14:28:00.871283    3620 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/disk.qcow2
	I1025 14:28:00.871293    3620 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:28:00.871333    3620 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:8b:08:ea:e8:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/disk.qcow2
	I1025 14:28:00.873386    3620 main.go:141] libmachine: STDOUT: 
	I1025 14:28:00.873401    3620 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:28:00.873420    3620 client.go:171] LocalClient.Create took 227.586ms
	I1025 14:28:02.875488    3620 start.go:128] duration metric: createHost completed in 2.249212917s
	I1025 14:28:02.875507    3620 start.go:83] releasing machines lock for "offline-docker-032000", held for 2.249277042s
	W1025 14:28:02.875517    3620 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:28:02.884998    3620 out.go:177] * Deleting "offline-docker-032000" in qemu2 ...
	W1025 14:28:02.893449    3620 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:28:02.893456    3620 start.go:706] Will try again in 5 seconds ...
	I1025 14:28:07.895640    3620 start.go:365] acquiring machines lock for offline-docker-032000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:28:07.895981    3620 start.go:369] acquired machines lock for "offline-docker-032000" in 256.208µs
	I1025 14:28:07.896109    3620 start.go:93] Provisioning new machine with config: &{Name:offline-docker-032000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.3 ClusterName:offline-docker-032000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:28:07.896422    3620 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:28:07.901301    3620 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 14:28:07.947547    3620 start.go:159] libmachine.API.Create for "offline-docker-032000" (driver="qemu2")
	I1025 14:28:07.947588    3620 client.go:168] LocalClient.Create starting
	I1025 14:28:07.947711    3620 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:28:07.947768    3620 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:07.947791    3620 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:07.947850    3620 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:28:07.947889    3620 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:07.947910    3620 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:07.948409    3620 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:28:08.082060    3620 main.go:141] libmachine: Creating SSH key...
	I1025 14:28:08.236627    3620 main.go:141] libmachine: Creating Disk image...
	I1025 14:28:08.236640    3620 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:28:08.236819    3620 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/disk.qcow2
	I1025 14:28:08.249250    3620 main.go:141] libmachine: STDOUT: 
	I1025 14:28:08.249267    3620 main.go:141] libmachine: STDERR: 
	I1025 14:28:08.249345    3620 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/disk.qcow2 +20000M
	I1025 14:28:08.259722    3620 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:28:08.259746    3620 main.go:141] libmachine: STDERR: 
	I1025 14:28:08.259767    3620 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/disk.qcow2
	I1025 14:28:08.259772    3620 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:28:08.259818    3620 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:5f:33:e9:3d:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/offline-docker-032000/disk.qcow2
	I1025 14:28:08.261511    3620 main.go:141] libmachine: STDOUT: 
	I1025 14:28:08.261528    3620 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:28:08.261547    3620 client.go:171] LocalClient.Create took 313.951375ms
	I1025 14:28:10.263672    3620 start.go:128] duration metric: createHost completed in 2.367227208s
	I1025 14:28:10.263716    3620 start.go:83] releasing machines lock for "offline-docker-032000", held for 2.36771125s
	W1025 14:28:10.263950    3620 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-032000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-032000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:28:10.270297    3620 out.go:177] 
	W1025 14:28:10.274361    3620 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:28:10.274383    3620 out.go:239] * 
	* 
	W1025 14:28:10.276994    3620 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:28:10.285244    3620 out.go:177] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-032000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:523: *** TestOffline FAILED at 2023-10-25 14:28:10.298554 -0700 PDT m=+1081.165465293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-032000 -n offline-docker-032000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-032000 -n offline-docker-032000: exit status 7 (46.438125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-032000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-032000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-032000
--- FAIL: TestOffline (9.94s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (33.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-355000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-355000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-355000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6cdf30af-1451-4a65-b17a-ebcb7e4818a1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6cdf30af-1451-4a65-b17a-ebcb7e4818a1] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.007686959s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p addons-355000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-355000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p addons-355000 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.2: exit status 1 (15.034252084s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                
stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-darwin-arm64 -p addons-355000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:310: (dbg) Run:  out/minikube-darwin-arm64 -p addons-355000 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-darwin-arm64 -p addons-355000 addons disable ingress --alsologtostderr -v=1: (7.214941583s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-355000 -n addons-355000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-355000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-774000 | jenkins | v1.31.2 | 25 Oct 23 14:10 PDT |                     |
	|         | -p download-only-774000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-774000 | jenkins | v1.31.2 | 25 Oct 23 14:10 PDT |                     |
	|         | -p download-only-774000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.31.2 | 25 Oct 23 14:10 PDT | 25 Oct 23 14:10 PDT |
	| delete  | -p download-only-774000                                                                     | download-only-774000 | jenkins | v1.31.2 | 25 Oct 23 14:10 PDT | 25 Oct 23 14:10 PDT |
	| delete  | -p download-only-774000                                                                     | download-only-774000 | jenkins | v1.31.2 | 25 Oct 23 14:10 PDT | 25 Oct 23 14:10 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-272000 | jenkins | v1.31.2 | 25 Oct 23 14:10 PDT |                     |
	|         | binary-mirror-272000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49314                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-272000                                                                     | binary-mirror-272000 | jenkins | v1.31.2 | 25 Oct 23 14:10 PDT | 25 Oct 23 14:10 PDT |
	| addons  | disable dashboard -p                                                                        | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:10 PDT |                     |
	|         | addons-355000                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:10 PDT |                     |
	|         | addons-355000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-355000 --wait=true                                                                | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:10 PDT | 25 Oct 23 14:12 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| ip      | addons-355000 ip                                                                            | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:12 PDT | 25 Oct 23 14:12 PDT |
	| addons  | addons-355000 addons disable                                                                | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:12 PDT | 25 Oct 23 14:12 PDT |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:12 PDT | 25 Oct 23 14:12 PDT |
	|         | -p addons-355000                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-355000 ssh cat                                                                       | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:13 PDT | 25 Oct 23 14:13 PDT |
	|         | /opt/local-path-provisioner/pvc-cc6901cf-d9fc-4f53-8183-180e9a68fcdf_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-355000 addons disable                                                                | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:13 PDT | 25 Oct 23 14:13 PDT |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:13 PDT | 25 Oct 23 14:13 PDT |
	|         | addons-355000                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:13 PDT | 25 Oct 23 14:13 PDT |
	|         | -p addons-355000                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-355000 addons                                                                        | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:13 PDT | 25 Oct 23 14:13 PDT |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-355000 addons                                                                        | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:13 PDT | 25 Oct 23 14:13 PDT |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-355000 addons                                                                        | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:13 PDT | 25 Oct 23 14:13 PDT |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:13 PDT | 25 Oct 23 14:13 PDT |
	|         | addons-355000                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-355000 ssh curl -s                                                                   | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:13 PDT | 25 Oct 23 14:13 PDT |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-355000 ip                                                                            | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:13 PDT | 25 Oct 23 14:13 PDT |
	| addons  | addons-355000 addons disable                                                                | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:13 PDT | 25 Oct 23 14:13 PDT |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-355000 addons disable                                                                | addons-355000        | jenkins | v1.31.2 | 25 Oct 23 14:13 PDT | 25 Oct 23 14:14 PDT |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 14:10:32
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 14:10:32.017972    1803 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:10:32.018113    1803 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:10:32.018116    1803 out.go:309] Setting ErrFile to fd 2...
	I1025 14:10:32.018119    1803 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:10:32.018252    1803 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:10:32.019321    1803 out.go:303] Setting JSON to false
	I1025 14:10:32.035664    1803 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":606,"bootTime":1698267626,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:10:32.035759    1803 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:10:32.040216    1803 out.go:177] * [addons-355000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:10:32.046202    1803 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:10:32.046276    1803 notify.go:220] Checking for updates...
	I1025 14:10:32.053177    1803 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:10:32.056130    1803 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:10:32.059150    1803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:10:32.062261    1803 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:10:32.063708    1803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:10:32.067385    1803 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:10:32.071161    1803 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:10:32.077196    1803 start.go:298] selected driver: qemu2
	I1025 14:10:32.077204    1803 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:10:32.077211    1803 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:10:32.079500    1803 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:10:32.083184    1803 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:10:32.086328    1803 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:10:32.086360    1803 cni.go:84] Creating CNI manager for ""
	I1025 14:10:32.086368    1803 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:10:32.086373    1803 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 14:10:32.086388    1803 start_flags.go:323] config:
	{Name:addons-355000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-355000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:10:32.091125    1803 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:10:32.095154    1803 out.go:177] * Starting control plane node addons-355000 in cluster addons-355000
	I1025 14:10:32.103127    1803 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:10:32.103150    1803 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:10:32.103159    1803 cache.go:56] Caching tarball of preloaded images
	I1025 14:10:32.103225    1803 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:10:32.103232    1803 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:10:32.103448    1803 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/config.json ...
	I1025 14:10:32.103460    1803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/config.json: {Name:mk103d21d766ac7e16dc83af30e86520109296cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:10:32.103692    1803 start.go:365] acquiring machines lock for addons-355000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:10:32.103821    1803 start.go:369] acquired machines lock for "addons-355000" in 123.666µs
	I1025 14:10:32.103831    1803 start.go:93] Provisioning new machine with config: &{Name:addons-355000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-355000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:10:32.103867    1803 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:10:32.112176    1803 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1025 14:10:32.342673    1803 start.go:159] libmachine.API.Create for "addons-355000" (driver="qemu2")
	I1025 14:10:32.342701    1803 client.go:168] LocalClient.Create starting
	I1025 14:10:32.342928    1803 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:10:32.407380    1803 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:10:32.524976    1803 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:10:32.815839    1803 main.go:141] libmachine: Creating SSH key...
	I1025 14:10:33.052943    1803 main.go:141] libmachine: Creating Disk image...
	I1025 14:10:33.052954    1803 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:10:33.057298    1803 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/disk.qcow2
	I1025 14:10:33.100907    1803 main.go:141] libmachine: STDOUT: 
	I1025 14:10:33.100937    1803 main.go:141] libmachine: STDERR: 
	I1025 14:10:33.101004    1803 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/disk.qcow2 +20000M
	I1025 14:10:33.111747    1803 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:10:33.111761    1803 main.go:141] libmachine: STDERR: 
	I1025 14:10:33.111790    1803 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/disk.qcow2
	I1025 14:10:33.111804    1803 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:10:33.111841    1803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:d6:65:0c:16:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/disk.qcow2
	I1025 14:10:33.165984    1803 main.go:141] libmachine: STDOUT: 
	I1025 14:10:33.166016    1803 main.go:141] libmachine: STDERR: 
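The invocation above is how the qemu2 driver brings the VM up: QEMU is exec'd through socket_vmnet_client, which connects to /var/run/socket_vmnet and hands that connection to QEMU as file descriptor 3, hence the -netdev socket,id=net0,fd=3 argument. A minimal Go sketch of the same launch; paths, MAC, memory, and CPU counts are copied from the log line, everything else is illustrative rather than minikube's actual driver code:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// socket_vmnet_client opens the shared vmnet socket, then execs QEMU
    	// with that connection already present as fd 3 (see "fd=3" below).
    	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
    		"/var/run/socket_vmnet",
    		"qemu-system-aarch64",
    		"-M", "virt", "-cpu", "host", "-accel", "hvf",
    		"-m", "4000", "-smp", "2", "-boot", "d",
    		"-cdrom", "boot2docker.iso", // illustrative local path
    		"-device", "virtio-net-pci,netdev=net0,mac=a6:d6:65:0c:16:f0",
    		"-netdev", "socket,id=net0,fd=3",
    		"-daemonize", "disk.qcow2", // illustrative local path
    	)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("launch failed: %v\n%s", err, out)
    	}
    }
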
	I1025 14:10:33.166022    1803 main.go:141] libmachine: Attempt 0
	I1025 14:10:33.166041    1803 main.go:141] libmachine: Searching for a6:d6:65:c:16:f0 in /var/db/dhcpd_leases ...
	I1025 14:10:35.168356    1803 main.go:141] libmachine: Attempt 1
	I1025 14:10:35.168447    1803 main.go:141] libmachine: Searching for a6:d6:65:c:16:f0 in /var/db/dhcpd_leases ...
	I1025 14:10:37.168841    1803 main.go:141] libmachine: Attempt 2
	I1025 14:10:37.168913    1803 main.go:141] libmachine: Searching for a6:d6:65:c:16:f0 in /var/db/dhcpd_leases ...
	I1025 14:10:39.171090    1803 main.go:141] libmachine: Attempt 3
	I1025 14:10:39.171113    1803 main.go:141] libmachine: Searching for a6:d6:65:c:16:f0 in /var/db/dhcpd_leases ...
	I1025 14:10:41.173178    1803 main.go:141] libmachine: Attempt 4
	I1025 14:10:41.173186    1803 main.go:141] libmachine: Searching for a6:d6:65:c:16:f0 in /var/db/dhcpd_leases ...
	I1025 14:10:43.175215    1803 main.go:141] libmachine: Attempt 5
	I1025 14:10:43.175224    1803 main.go:141] libmachine: Searching for a6:d6:65:c:16:f0 in /var/db/dhcpd_leases ...
	I1025 14:10:45.177300    1803 main.go:141] libmachine: Attempt 6
	I1025 14:10:45.177333    1803 main.go:141] libmachine: Searching for a6:d6:65:c:16:f0 in /var/db/dhcpd_leases ...
	I1025 14:10:47.179473    1803 main.go:141] libmachine: Attempt 7
	I1025 14:10:47.179514    1803 main.go:141] libmachine: Searching for a6:d6:65:c:16:f0 in /var/db/dhcpd_leases ...
	I1025 14:10:47.179581    1803 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1025 14:10:47.179607    1803 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x653ad5d6}
	I1025 14:10:47.179617    1803 main.go:141] libmachine: Found match: a6:d6:65:c:16:f0
	I1025 14:10:47.179625    1803 main.go:141] libmachine: IP: 192.168.105.2
	I1025 14:10:47.179631    1803 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
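Attempts 0 through 7 above are the driver polling /var/db/dhcpd_leases until the guest NIC picks up a DHCP lease. Note that the MAC being searched for is a6:d6:65:c:16:f0, not the a6:d6:65:0c:16:f0 passed to QEMU: macOS strips leading zeros from each octet in the lease file. A small sketch of the lookup, assuming the usual macOS lease format of blocks containing name=/ip_address=/hw_address= lines:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // findLeaseIP scans the macOS DHCP lease file for a block whose
    // hw_address contains the (zero-stripped) MAC and returns the
    // ip_address recorded in that block.
    func findLeaseIP(leasePath, mac string) (string, error) {
    	data, err := os.ReadFile(leasePath)
    	if err != nil {
    		return "", err
    	}
    	var ip string
    	for _, line := range strings.Split(string(data), "\n") {
    		line = strings.TrimSpace(line)
    		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
    			ip = v // remember the most recent ip_address line
    		}
    		if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
    			return ip, nil // hw_address follows ip_address within a block
    		}
    	}
    	return "", fmt.Errorf("no lease for %s", mac)
    }

    func main() {
    	ip, err := findLeaseIP("/var/db/dhcpd_leases", "a6:d6:65:c:16:f0")
    	fmt.Println(ip, err)
    }
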
	I1025 14:10:49.202285    1803 machine.go:88] provisioning docker machine ...
	I1025 14:10:49.202346    1803 buildroot.go:166] provisioning hostname "addons-355000"
	I1025 14:10:49.203299    1803 main.go:141] libmachine: Using SSH client type: native
	I1025 14:10:49.204057    1803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050eb0c0] 0x1050ed830 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1025 14:10:49.204076    1803 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-355000 && echo "addons-355000" | sudo tee /etc/hostname
	I1025 14:10:49.289748    1803 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-355000
	
	I1025 14:10:49.289853    1803 main.go:141] libmachine: Using SSH client type: native
	I1025 14:10:49.290353    1803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050eb0c0] 0x1050ed830 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1025 14:10:49.290378    1803 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-355000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-355000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-355000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 14:10:49.360197    1803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 14:10:49.360219    1803 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-1304/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-1304/.minikube}
	I1025 14:10:49.360232    1803 buildroot.go:174] setting up certificates
	I1025 14:10:49.360241    1803 provision.go:83] configureAuth start
	I1025 14:10:49.360247    1803 provision.go:138] copyHostCerts
	I1025 14:10:49.360425    1803 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.pem (1082 bytes)
	I1025 14:10:49.360802    1803 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-1304/.minikube/cert.pem (1123 bytes)
	I1025 14:10:49.360966    1803 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-1304/.minikube/key.pem (1675 bytes)
	I1025 14:10:49.361091    1803 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca-key.pem org=jenkins.addons-355000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-355000]
	I1025 14:10:49.547554    1803 provision.go:172] copyRemoteCerts
	I1025 14:10:49.547619    1803 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 14:10:49.547631    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:10:49.579059    1803 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 14:10:49.586447    1803 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 14:10:49.593898    1803 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1025 14:10:49.600917    1803 provision.go:86] duration metric: configureAuth took 240.673ms
	I1025 14:10:49.600927    1803 buildroot.go:189] setting minikube options for container-runtime
	I1025 14:10:49.601035    1803 config.go:182] Loaded profile config "addons-355000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:10:49.601078    1803 main.go:141] libmachine: Using SSH client type: native
	I1025 14:10:49.601297    1803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050eb0c0] 0x1050ed830 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1025 14:10:49.601301    1803 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 14:10:49.658982    1803 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 14:10:49.658992    1803 buildroot.go:70] root file system type: tmpfs
	I1025 14:10:49.659055    1803 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 14:10:49.659090    1803 main.go:141] libmachine: Using SSH client type: native
	I1025 14:10:49.659325    1803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050eb0c0] 0x1050ed830 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1025 14:10:49.659367    1803 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 14:10:49.717407    1803 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 14:10:49.717451    1803 main.go:141] libmachine: Using SSH client type: native
	I1025 14:10:49.717674    1803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050eb0c0] 0x1050ed830 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1025 14:10:49.717684    1803 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 14:10:50.044112    1803 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1025 14:10:50.044126    1803 machine.go:91] provisioned docker machine in 841.814084ms
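The diff || { mv; daemon-reload; enable; restart; } one-liner above is an install-only-if-changed idiom: on this first boot diff fails ("can't stat ... docker.service"), so the freshly written unit is moved into place, systemd reloaded, and docker enabled and restarted; on later boots an unchanged unit skips the restart. The same idempotent pattern in Go, run locally rather than over SSH (paths illustrative):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func installIfChanged(newPath, livePath string) error {
    	newData, err := os.ReadFile(newPath)
    	if err != nil {
    		return err
    	}
    	// If the live unit exists and is identical, skip the restart entirely.
    	if old, err := os.ReadFile(livePath); err == nil && bytes.Equal(old, newData) {
    		return nil
    	}
    	if err := os.Rename(newPath, livePath); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"},
    		{"enable", "docker"},
    		{"restart", "docker"},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	_ = installIfChanged("/lib/systemd/system/docker.service.new",
    		"/lib/systemd/system/docker.service")
    }
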
	I1025 14:10:50.044132    1803 client.go:171] LocalClient.Create took 17.701514s
	I1025 14:10:50.044144    1803 start.go:167] duration metric: libmachine.API.Create for "addons-355000" took 17.701581042s
	I1025 14:10:50.044149    1803 start.go:300] post-start starting for "addons-355000" (driver="qemu2")
	I1025 14:10:50.044156    1803 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 14:10:50.044225    1803 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 14:10:50.044234    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:10:50.074965    1803 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 14:10:50.076236    1803 info.go:137] Remote host: Buildroot 2021.02.12
	I1025 14:10:50.076242    1803 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-1304/.minikube/addons for local assets ...
	I1025 14:10:50.076311    1803 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-1304/.minikube/files for local assets ...
	I1025 14:10:50.076337    1803 start.go:303] post-start completed in 32.185083ms
	I1025 14:10:50.076722    1803 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/config.json ...
	I1025 14:10:50.076900    1803 start.go:128] duration metric: createHost completed in 17.973123041s
	I1025 14:10:50.076924    1803 main.go:141] libmachine: Using SSH client type: native
	I1025 14:10:50.077139    1803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050eb0c0] 0x1050ed830 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1025 14:10:50.077147    1803 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 14:10:50.131517    1803 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698268249.637972710
	
	I1025 14:10:50.131525    1803 fix.go:206] guest clock: 1698268249.637972710
	I1025 14:10:50.131529    1803 fix.go:219] Guest: 2023-10-25 14:10:49.63797271 -0700 PDT Remote: 2023-10-25 14:10:50.076905 -0700 PDT m=+18.081154543 (delta=-438.93229ms)
	I1025 14:10:50.131540    1803 fix.go:190] guest clock delta is within tolerance: -438.93229ms
	I1025 14:10:50.131542    1803 start.go:83] releasing machines lock for "addons-355000", held for 18.027810625s
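The guest clock check just above works by running date +%s.%N inside the VM (1698268249.637972710 here), parsing it, and comparing against host time; the guest lags by about 439ms, which is within tolerance, so no clock correction is applied. A sketch of the parse-and-compare step; the one-second tolerance is an assumption for illustration, not necessarily minikube's threshold:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock parses `date +%s.%N` output (assumes the full
    // 9-digit nanosecond field that %N prints).
    func parseGuestClock(s string) (time.Time, error) {
    	sec, nsec, _ := strings.Cut(strings.TrimSpace(s), ".")
    	se, err := strconv.ParseInt(sec, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	ns, err := strconv.ParseInt(nsec, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	return time.Unix(se, ns), nil
    }

    func main() {
    	guest, _ := parseGuestClock("1698268249.637972710")
    	delta := time.Until(guest) // negative when the guest lags the host
    	fmt.Printf("delta=%v within=%v\n", delta, delta.Abs() <= time.Second)
    }
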
	I1025 14:10:50.131844    1803 ssh_runner.go:195] Run: cat /version.json
	I1025 14:10:50.131849    1803 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 14:10:50.131854    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:10:50.131879    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:10:50.161417    1803 ssh_runner.go:195] Run: systemctl --version
	I1025 14:10:50.208645    1803 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 14:10:50.210681    1803 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 14:10:50.210718    1803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 14:10:50.216825    1803 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 14:10:50.216834    1803 start.go:472] detecting cgroup driver to use...
	I1025 14:10:50.216971    1803 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 14:10:50.222856    1803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 14:10:50.226473    1803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 14:10:50.230257    1803 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 14:10:50.230298    1803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 14:10:50.233931    1803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 14:10:50.236879    1803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 14:10:50.239670    1803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 14:10:50.243014    1803 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 14:10:50.246521    1803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
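The run of sed commands above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver, the runc.v2 runtime, and /etc/cni/net.d; each is a line-anchored regexp substitution. One of them, the SystemdCgroup flip, expressed in Go against an in-memory copy of the file, purely for illustration:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	in := []byte("[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n")
    	// (?m) makes ^/$ match per line, mirroring sed's line-oriented edit;
    	// ${1} keeps the original indentation.
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	fmt.Printf("%s", re.ReplaceAll(in, []byte("${1}SystemdCgroup = false")))
    }
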
	I1025 14:10:50.249689    1803 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 14:10:50.252316    1803 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 14:10:50.255155    1803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 14:10:50.326965    1803 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 14:10:50.335585    1803 start.go:472] detecting cgroup driver to use...
	I1025 14:10:50.335650    1803 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 14:10:50.341580    1803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 14:10:50.346226    1803 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 14:10:50.356899    1803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 14:10:50.361252    1803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 14:10:50.365961    1803 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1025 14:10:50.406779    1803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 14:10:50.412087    1803 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 14:10:50.417521    1803 ssh_runner.go:195] Run: which cri-dockerd
	I1025 14:10:50.418930    1803 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 14:10:50.421964    1803 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 14:10:50.427134    1803 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 14:10:50.503914    1803 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 14:10:50.582766    1803 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 14:10:50.582823    1803 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 14:10:50.588430    1803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 14:10:50.666349    1803 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 14:10:51.815927    1803 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.14956475s)
	I1025 14:10:51.815977    1803 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 14:10:51.886888    1803 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 14:10:51.959048    1803 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 14:10:52.030201    1803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 14:10:52.109554    1803 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 14:10:52.116611    1803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 14:10:52.194279    1803 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 14:10:52.217839    1803 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 14:10:52.217915    1803 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 14:10:52.220205    1803 start.go:540] Will wait 60s for crictl version
	I1025 14:10:52.220248    1803 ssh_runner.go:195] Run: which crictl
	I1025 14:10:52.221643    1803 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 14:10:52.243539    1803 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1025 14:10:52.243601    1803 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 14:10:52.253873    1803 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 14:10:52.269379    1803 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1025 14:10:52.269510    1803 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1025 14:10:52.270884    1803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
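The bash one-liner above pins host.minikube.internal in the guest's /etc/hosts with a dedup-then-append pattern: grep -v strips any stale entry, echo appends the fresh mapping, and sudo cp swaps the temp file in. The same idea in Go, run against a scratch copy of the file; the path is illustrative and, like the cp above, the write is not atomic:

    package main

    import (
    	"os"
    	"strings"
    )

    func setHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Drop empty lines and any existing "<ip>\t<host>" mapping.
    		if line != "" && !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	_ = setHostsEntry("/tmp/hosts", "192.168.105.1", "host.minikube.internal")
    }
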
	I1025 14:10:52.274814    1803 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:10:52.274857    1803 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 14:10:52.279945    1803 docker.go:693] Got preloaded images: 
	I1025 14:10:52.279951    1803 docker.go:699] registry.k8s.io/kube-apiserver:v1.28.3 wasn't preloaded
	I1025 14:10:52.280019    1803 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 14:10:52.283174    1803 ssh_runner.go:195] Run: which lz4
	I1025 14:10:52.284427    1803 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 14:10:52.285730    1803 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 14:10:52.285740    1803 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (357729134 bytes)
	I1025 14:10:53.621777    1803 docker.go:657] Took 1.337358 seconds to copy over tarball
	I1025 14:10:53.621841    1803 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 14:10:54.709000    1803 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.087150334s)
	I1025 14:10:54.709012    1803 ssh_runner.go:146] rm: /preloaded.tar.lz4
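Because /preloaded.tar.lz4 was absent, the ~357MB preload tarball is copied into the guest, untarred into /var with lz4 as tar's external decompressor, then deleted to reclaim the space. A local, non-SSH sketch of the extract-and-clean step; paths are illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func extractPreload(tarball, dest string) error {
    	// tar -I lz4 pipes the archive through the external lz4 binary,
    	// matching the command in the log above.
    	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		return fmt.Errorf("extract %s: %w", tarball, err)
    	}
    	return exec.Command("sudo", "rm", "-f", tarball).Run() // reclaim the space
    }

    func main() {
    	_ = extractPreload("/preloaded.tar.lz4", "/var")
    }
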
	I1025 14:10:54.724441    1803 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 14:10:54.727516    1803 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1025 14:10:54.732812    1803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 14:10:54.798242    1803 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 14:10:57.308486    1803 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5102395s)
	I1025 14:10:57.308571    1803 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 14:10:57.314548    1803 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 14:10:57.314558    1803 cache_images.go:84] Images are preloaded, skipping loading
	I1025 14:10:57.314630    1803 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 14:10:57.322103    1803 cni.go:84] Creating CNI manager for ""
	I1025 14:10:57.322113    1803 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:10:57.322138    1803 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 14:10:57.322147    1803 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-355000 NodeName:addons-355000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 14:10:57.322213    1803 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-355000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
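minikube renders the kubeadm config above from templates filled with the kubeadm options printed earlier (advertise address, CRI socket, node name, and so on). A minimal sketch of that render step using text/template; the template text and field names here are illustrative, not minikube's actual templates:

    package main

    import (
    	"os"
    	"text/template"
    )

    // initCfg mirrors the InitConfiguration stanza shape from the log above.
    var initCfg = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `))

    func main() {
    	_ = initCfg.Execute(os.Stdout, map[string]any{
    		"AdvertiseAddress": "192.168.105.2",
    		"APIServerPort":    8443,
    		"CRISocket":        "unix:///var/run/cri-dockerd.sock",
    		"NodeName":         "addons-355000",
    		"NodeIP":           "192.168.105.2",
    	})
    }
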
	
	I1025 14:10:57.322246    1803 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-355000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-355000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 14:10:57.322300    1803 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 14:10:57.325595    1803 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 14:10:57.325640    1803 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 14:10:57.328198    1803 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1025 14:10:57.333145    1803 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 14:10:57.338138    1803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1025 14:10:57.343292    1803 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I1025 14:10:57.344485    1803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 14:10:57.347881    1803 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000 for IP: 192.168.105.2
	I1025 14:10:57.347890    1803 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b24ebfb6727e8dcf7d0828ec4a3e725ccc80b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:10:57.348034    1803 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.key
	I1025 14:10:57.537837    1803 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.crt ...
	I1025 14:10:57.537843    1803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.crt: {Name:mk6eafb9ebe82c9ffc2b8bf89d1d9ba5c96a5309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:10:57.538088    1803 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.key ...
	I1025 14:10:57.538091    1803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.key: {Name:mk294da349c3bd20d6ef1b19c810317a3e802bd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:10:57.538210    1803 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.key
	I1025 14:10:57.797882    1803 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.crt ...
	I1025 14:10:57.797890    1803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.crt: {Name:mk76855529ce3b9c35f99405ac9a9fbf6c066f7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:10:57.798099    1803 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.key ...
	I1025 14:10:57.798103    1803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.key: {Name:mk57031984bad282b1105605b42c14a4c14270ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:10:57.798253    1803 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.key
	I1025 14:10:57.798260    1803 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt with IP's: []
	I1025 14:10:57.930114    1803 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt ...
	I1025 14:10:57.930123    1803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: {Name:mkda882066b9c5787cf6fdbdfb68fdc295044f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:10:57.930323    1803 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.key ...
	I1025 14:10:57.930327    1803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.key: {Name:mk556a09f1addf275e274626a739bc2a1ff350bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:10:57.930442    1803 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/apiserver.key.96055969
	I1025 14:10:57.930451    1803 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 14:10:57.992432    1803 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/apiserver.crt.96055969 ...
	I1025 14:10:57.992435    1803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/apiserver.crt.96055969: {Name:mke5aa0f994369606d79d20ebc584b7d3d8ef097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:10:57.992560    1803 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/apiserver.key.96055969 ...
	I1025 14:10:57.992563    1803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/apiserver.key.96055969: {Name:mkd2ce326fc535435ef5c7173e28d059139d26c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:10:57.992662    1803 certs.go:337] copying /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/apiserver.crt
	I1025 14:10:57.992880    1803 certs.go:341] copying /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/apiserver.key
	I1025 14:10:57.993000    1803 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/proxy-client.key
	I1025 14:10:57.993011    1803 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/proxy-client.crt with IP's: []
	I1025 14:10:58.040225    1803 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/proxy-client.crt ...
	I1025 14:10:58.040230    1803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/proxy-client.crt: {Name:mk117c11896a3e381eefceadc25abe0ac042064f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:10:58.040363    1803 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/proxy-client.key ...
	I1025 14:10:58.040366    1803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/proxy-client.key: {Name:mk69670252cb3514c6920725a20a1c24a5c73f5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:10:58.040576    1803 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 14:10:58.040597    1803 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem (1082 bytes)
	I1025 14:10:58.040615    1803 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem (1123 bytes)
	I1025 14:10:58.040632    1803 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/key.pem (1675 bytes)
	I1025 14:10:58.040975    1803 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 14:10:58.048232    1803 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 14:10:58.054837    1803 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 14:10:58.062194    1803 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 14:10:58.069212    1803 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 14:10:58.075976    1803 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 14:10:58.082955    1803 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 14:10:58.090159    1803 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 14:10:58.096763    1803 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 14:10:58.103196    1803 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 14:10:58.108931    1803 ssh_runner.go:195] Run: openssl version
	I1025 14:10:58.110877    1803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 14:10:58.114322    1803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 14:10:58.115812    1803 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I1025 14:10:58.115832    1803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 14:10:58.117683    1803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 14:10:58.120617    1803 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 14:10:58.121908    1803 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 14:10:58.121946    1803 kubeadm.go:404] StartCluster: {Name:addons-355000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.3 ClusterName:addons-355000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:10:58.122015    1803 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 14:10:58.127360    1803 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 14:10:58.130837    1803 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 14:10:58.133827    1803 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 14:10:58.136609    1803 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 14:10:58.136625    1803 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 14:10:58.158579    1803 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1025 14:10:58.158603    1803 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 14:10:58.215638    1803 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 14:10:58.215697    1803 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 14:10:58.215751    1803 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 14:10:58.312990    1803 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 14:10:58.322166    1803 out.go:204]   - Generating certificates and keys ...
	I1025 14:10:58.322204    1803 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 14:10:58.322239    1803 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 14:10:58.365187    1803 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 14:10:58.438349    1803 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 14:10:58.634304    1803 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 14:10:58.732971    1803 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 14:10:58.809302    1803 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 14:10:58.809369    1803 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-355000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I1025 14:10:58.875082    1803 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 14:10:58.875143    1803 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-355000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I1025 14:10:58.923244    1803 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 14:10:58.980841    1803 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 14:10:59.098493    1803 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 14:10:59.098529    1803 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 14:10:59.191265    1803 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 14:10:59.246018    1803 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 14:10:59.282542    1803 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 14:10:59.434670    1803 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 14:10:59.434895    1803 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 14:10:59.436291    1803 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 14:10:59.442562    1803 out.go:204]   - Booting up control plane ...
	I1025 14:10:59.442641    1803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 14:10:59.442683    1803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 14:10:59.442716    1803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 14:10:59.444074    1803 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 14:10:59.444618    1803 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 14:10:59.444645    1803 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 14:10:59.514040    1803 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 14:11:03.015409    1803 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.501397 seconds
	I1025 14:11:03.015495    1803 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 14:11:03.020099    1803 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 14:11:03.543589    1803 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 14:11:03.543687    1803 kubeadm.go:322] [mark-control-plane] Marking the node addons-355000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 14:11:04.049783    1803 kubeadm.go:322] [bootstrap-token] Using token: l6b1g2.p1qp3khc8wgerrys
	I1025 14:11:04.053359    1803 out.go:204]   - Configuring RBAC rules ...
	I1025 14:11:04.053420    1803 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 14:11:04.062435    1803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 14:11:04.064991    1803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 14:11:04.066035    1803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 14:11:04.067136    1803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 14:11:04.068254    1803 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 14:11:04.072476    1803 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 14:11:04.233885    1803 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1025 14:11:04.464588    1803 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1025 14:11:04.464901    1803 kubeadm.go:322] 
	I1025 14:11:04.464932    1803 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1025 14:11:04.464936    1803 kubeadm.go:322] 
	I1025 14:11:04.464971    1803 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1025 14:11:04.464978    1803 kubeadm.go:322] 
	I1025 14:11:04.464989    1803 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1025 14:11:04.465033    1803 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 14:11:04.465060    1803 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 14:11:04.465062    1803 kubeadm.go:322] 
	I1025 14:11:04.465118    1803 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1025 14:11:04.465124    1803 kubeadm.go:322] 
	I1025 14:11:04.465153    1803 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 14:11:04.465157    1803 kubeadm.go:322] 
	I1025 14:11:04.465183    1803 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1025 14:11:04.465225    1803 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 14:11:04.465254    1803 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 14:11:04.465258    1803 kubeadm.go:322] 
	I1025 14:11:04.465306    1803 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 14:11:04.465348    1803 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1025 14:11:04.465354    1803 kubeadm.go:322] 
	I1025 14:11:04.465401    1803 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token l6b1g2.p1qp3khc8wgerrys \
	I1025 14:11:04.465455    1803 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f9ed8a6c1ae5e44374807bc7f35db343f3de11d7a52de7496b63e5c8e8e1eaf6 \
	I1025 14:11:04.465465    1803 kubeadm.go:322] 	--control-plane 
	I1025 14:11:04.465467    1803 kubeadm.go:322] 
	I1025 14:11:04.465515    1803 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1025 14:11:04.465522    1803 kubeadm.go:322] 
	I1025 14:11:04.465563    1803 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token l6b1g2.p1qp3khc8wgerrys \
	I1025 14:11:04.465622    1803 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f9ed8a6c1ae5e44374807bc7f35db343f3de11d7a52de7496b63e5c8e8e1eaf6 
	I1025 14:11:04.465691    1803 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 14:11:04.465698    1803 cni.go:84] Creating CNI manager for ""
	I1025 14:11:04.465708    1803 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:11:04.473344    1803 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 14:11:04.477404    1803 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 14:11:04.480492    1803 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1025 14:11:04.485096    1803 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 14:11:04.485138    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:04.485147    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc minikube.k8s.io/name=addons-355000 minikube.k8s.io/updated_at=2023_10_25T14_11_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:04.554890    1803 ops.go:34] apiserver oom_adj: -16
	I1025 14:11:04.554925    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:04.587526    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:05.122404    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:05.622389    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:06.122433    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:06.622417    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:07.122432    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:07.622408    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:08.122158    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:08.622443    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:09.122449    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:09.622429    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:10.122401    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:10.622381    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:11.122388    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:11.622408    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:12.122337    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:12.622364    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:13.122337    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:13.622402    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:14.122382    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:14.622326    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:15.122328    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:15.622346    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:16.121993    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:16.622363    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:17.122306    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:17.621538    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:18.122316    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:18.622325    1803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:11:18.656840    1803 kubeadm.go:1081] duration metric: took 14.171808791s to wait for elevateKubeSystemPrivileges.
	I1025 14:11:18.656855    1803 kubeadm.go:406] StartCluster complete in 20.535018208s
	I1025 14:11:18.656866    1803 settings.go:142] acquiring lock: {Name:mka8243895d2abf46689bcbcc2c73a1efa650151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:11:18.657019    1803 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:11:18.657214    1803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/kubeconfig: {Name:mkdc8e211286b196dbaba95cec2e4580798673af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:11:18.657445    1803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 14:11:18.657541    1803 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1025 14:11:18.657584    1803 addons.go:69] Setting volumesnapshots=true in profile "addons-355000"
	I1025 14:11:18.657590    1803 addons.go:231] Setting addon volumesnapshots=true in "addons-355000"
	I1025 14:11:18.657597    1803 addons.go:69] Setting ingress-dns=true in profile "addons-355000"
	I1025 14:11:18.657604    1803 addons.go:231] Setting addon ingress-dns=true in "addons-355000"
	I1025 14:11:18.657607    1803 host.go:66] Checking if "addons-355000" exists ...
	I1025 14:11:18.657624    1803 host.go:66] Checking if "addons-355000" exists ...
	I1025 14:11:18.657626    1803 addons.go:69] Setting default-storageclass=true in profile "addons-355000"
	I1025 14:11:18.657634    1803 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-355000"
	I1025 14:11:18.657669    1803 addons.go:69] Setting inspektor-gadget=true in profile "addons-355000"
	I1025 14:11:18.657677    1803 addons.go:231] Setting addon inspektor-gadget=true in "addons-355000"
	I1025 14:11:18.657687    1803 addons.go:69] Setting storage-provisioner=true in profile "addons-355000"
	I1025 14:11:18.657697    1803 host.go:66] Checking if "addons-355000" exists ...
	I1025 14:11:18.657701    1803 addons.go:69] Setting registry=true in profile "addons-355000"
	I1025 14:11:18.657704    1803 addons.go:231] Setting addon registry=true in "addons-355000"
	I1025 14:11:18.657793    1803 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-355000"
	I1025 14:11:18.657800    1803 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-355000"
	I1025 14:11:18.657811    1803 host.go:66] Checking if "addons-355000" exists ...
	I1025 14:11:18.657809    1803 host.go:66] Checking if "addons-355000" exists ...
	I1025 14:11:18.657903    1803 config.go:182] Loaded profile config "addons-355000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:11:18.657697    1803 addons.go:231] Setting addon storage-provisioner=true in "addons-355000"
	I1025 14:11:18.657975    1803 addons.go:69] Setting gcp-auth=true in profile "addons-355000"
	I1025 14:11:18.657981    1803 mustload.go:65] Loading cluster: addons-355000
	I1025 14:11:18.657991    1803 host.go:66] Checking if "addons-355000" exists ...
	I1025 14:11:18.658033    1803 retry.go:31] will retry after 796.734814ms: connect: dial unix /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/monitor: connect: connection refused
	I1025 14:11:18.658044    1803 addons.go:69] Setting metrics-server=true in profile "addons-355000"
	I1025 14:11:18.658047    1803 addons.go:231] Setting addon metrics-server=true in "addons-355000"
	I1025 14:11:18.658050    1803 config.go:182] Loaded profile config "addons-355000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:11:18.658056    1803 host.go:66] Checking if "addons-355000" exists ...
	I1025 14:11:18.658175    1803 retry.go:31] will retry after 825.023421ms: connect: dial unix /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/monitor: connect: connection refused
	I1025 14:11:18.658181    1803 addons.go:69] Setting ingress=true in profile "addons-355000"
	I1025 14:11:18.658184    1803 addons.go:231] Setting addon ingress=true in "addons-355000"
	I1025 14:11:18.658190    1803 retry.go:31] will retry after 1.03350723s: connect: dial unix /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/monitor: connect: connection refused
	I1025 14:11:18.658179    1803 retry.go:31] will retry after 944.303558ms: connect: dial unix /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/monitor: connect: connection refused
	I1025 14:11:18.658202    1803 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-355000"
	I1025 14:11:18.658195    1803 host.go:66] Checking if "addons-355000" exists ...
	I1025 14:11:18.658213    1803 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-355000"
	I1025 14:11:18.658225    1803 host.go:66] Checking if "addons-355000" exists ...
	I1025 14:11:18.658242    1803 retry.go:31] will retry after 1.035456787s: connect: dial unix /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/monitor: connect: connection refused
	I1025 14:11:18.658198    1803 addons.go:69] Setting cloud-spanner=true in profile "addons-355000"
	I1025 14:11:18.658250    1803 addons.go:231] Setting addon cloud-spanner=true in "addons-355000"
	I1025 14:11:18.658253    1803 retry.go:31] will retry after 1.277787008s: connect: dial unix /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/monitor: connect: connection refused
	I1025 14:11:18.658261    1803 host.go:66] Checking if "addons-355000" exists ...
	I1025 14:11:18.658334    1803 retry.go:31] will retry after 632.588682ms: connect: dial unix /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/monitor: connect: connection refused
	I1025 14:11:18.658417    1803 retry.go:31] will retry after 1.215273536s: connect: dial unix /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/monitor: connect: connection refused
	I1025 14:11:18.658198    1803 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-355000"
	I1025 14:11:18.658447    1803 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-355000"
	I1025 14:11:18.658450    1803 retry.go:31] will retry after 686.436242ms: connect: dial unix /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/monitor: connect: connection refused
	I1025 14:11:18.658495    1803 retry.go:31] will retry after 514.081108ms: connect: dial unix /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/monitor: connect: connection refused
	I1025 14:11:18.658565    1803 retry.go:31] will retry after 1.250131711s: connect: dial unix /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/monitor: connect: connection refused
	I1025 14:11:18.663229    1803 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1025 14:11:18.671120    1803 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1025 14:11:18.667315    1803 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1025 14:11:18.674322    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1025 14:11:18.674334    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:11:18.674505    1803 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 14:11:18.674511    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1025 14:11:18.674516    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:11:18.679627    1803 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-355000" context rescaled to 1 replicas
	I1025 14:11:18.679646    1803 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:11:18.685250    1803 out.go:177] * Verifying Kubernetes components...
	I1025 14:11:18.694247    1803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 14:11:18.755908    1803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 14:11:18.756282    1803 node_ready.go:35] waiting up to 6m0s for node "addons-355000" to be "Ready" ...
	I1025 14:11:18.758295    1803 node_ready.go:49] node "addons-355000" has status "Ready":"True"
	I1025 14:11:18.758301    1803 node_ready.go:38] duration metric: took 2.01075ms waiting for node "addons-355000" to be "Ready" ...
	I1025 14:11:18.758305    1803 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 14:11:18.762038    1803 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hkh4d" in "kube-system" namespace to be "Ready" ...
	I1025 14:11:18.767373    1803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 14:11:18.773671    1803 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1025 14:11:18.773683    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1025 14:11:18.789242    1803 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1025 14:11:18.789253    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1025 14:11:18.809231    1803 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1025 14:11:18.809243    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1025 14:11:18.815075    1803 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1025 14:11:18.815087    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1025 14:11:18.834532    1803 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1025 14:11:18.834544    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1025 14:11:18.944829    1803 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 14:11:18.944841    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1025 14:11:18.995993    1803 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1025 14:11:18.996003    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1025 14:11:19.008705    1803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1025 14:11:19.177731    1803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.3
	I1025 14:11:19.180611    1803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1025 14:11:19.184652    1803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1025 14:11:19.187786    1803 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 14:11:19.187794    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1025 14:11:19.187804    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:11:19.297609    1803 out.go:177]   - Using image docker.io/registry:2.8.3
	I1025 14:11:19.301522    1803 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1025 14:11:19.305556    1803 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 14:11:19.305570    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1025 14:11:19.305582    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:11:19.350469    1803 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1025 14:11:19.354519    1803 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1025 14:11:19.354527    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 14:11:19.354536    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:11:19.354942    1803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 14:11:19.436256    1803 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 14:11:19.436267    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 14:11:19.460588    1803 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.1
	I1025 14:11:19.464599    1803 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 14:11:19.464609    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 14:11:19.464621    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:11:19.470220    1803 start.go:926] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I1025 14:11:19.483750    1803 host.go:66] Checking if "addons-355000" exists ...
	I1025 14:11:19.484646    1803 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 14:11:19.484652    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 14:11:19.500318    1803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 14:11:19.504757    1803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 14:11:19.524381    1803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 14:11:19.605506    1803 addons.go:231] Setting addon default-storageclass=true in "addons-355000"
	I1025 14:11:19.605526    1803 host.go:66] Checking if "addons-355000" exists ...
	I1025 14:11:19.606242    1803 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 14:11:19.606248    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 14:11:19.606255    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:11:19.697121    1803 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 14:11:19.701186    1803 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 14:11:19.701197    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 14:11:19.701208    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:11:19.706047    1803 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 14:11:19.710108    1803 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 14:11:19.710119    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 14:11:19.710130    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:11:19.721026    1803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 14:11:19.774250    1803 pod_ready.go:92] pod "coredns-5dd5756b68-hkh4d" in "kube-system" namespace has status "Ready":"True"
	I1025 14:11:19.774258    1803 pod_ready.go:81] duration metric: took 1.012217708s waiting for pod "coredns-5dd5756b68-hkh4d" in "kube-system" namespace to be "Ready" ...
	I1025 14:11:19.774263    1803 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wvvth" in "kube-system" namespace to be "Ready" ...
	I1025 14:11:19.875773    1803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 14:11:19.879801    1803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 14:11:19.887100    1803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 14:11:19.894782    1803 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 14:11:19.904968    1803 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 14:11:19.908936    1803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 14:11:19.910000    1803 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-355000"
	I1025 14:11:19.912007    1803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 14:11:19.912023    1803 host.go:66] Checking if "addons-355000" exists ...
	I1025 14:11:19.917968    1803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 14:11:19.928908    1803 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 14:11:19.924957    1803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 14:11:19.936960    1803 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 14:11:19.940958    1803 out.go:177]   - Using image docker.io/busybox:stable
	I1025 14:11:19.937053    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 14:11:19.944853    1803 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1025 14:11:19.940979    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:11:19.948964    1803 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 14:11:19.948969    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 14:11:19.948979    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:11:19.949051    1803 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 14:11:19.949063    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 14:11:19.949076    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:11:20.014682    1803 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 14:11:20.014692    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 14:11:20.081862    1803 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 14:11:20.081872    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 14:11:20.140386    1803 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 14:11:20.140399    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 14:11:20.145652    1803 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 14:11:20.145661    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 14:11:20.155040    1803 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 14:11:20.155053    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 14:11:20.166148    1803 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 14:11:20.166159    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 14:11:20.175517    1803 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 14:11:20.175528    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 14:11:20.182966    1803 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 14:11:20.182977    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 14:11:20.249159    1803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.240441459s)
	I1025 14:11:20.249196    1803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.481820792s)
	I1025 14:11:20.265411    1803 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 14:11:20.265421    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 14:11:20.270881    1803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 14:11:20.328120    1803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 14:11:20.329396    1803 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 14:11:20.329403    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 14:11:20.344214    1803 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 14:11:20.344225    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 14:11:20.371292    1803 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 14:11:20.371306    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 14:11:20.374901    1803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 14:11:20.563066    1803 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 14:11:20.563084    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 14:11:20.750803    1803 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 14:11:20.750814    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 14:11:20.793102    1803 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 14:11:20.793113    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 14:11:20.903528    1803 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 14:11:20.903542    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 14:11:21.003402    1803 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 14:11:21.003413    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 14:11:21.061437    1803 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 14:11:21.061450    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 14:11:21.114564    1803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 14:11:21.781545    1803 pod_ready.go:102] pod "coredns-5dd5756b68-wvvth" in "kube-system" namespace has status "Ready":"False"
	I1025 14:11:22.017039    1803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.516720041s)
	I1025 14:11:22.017039    1803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.66209625s)
	I1025 14:11:22.017061    1803 addons.go:467] Verifying addon ingress=true in "addons-355000"
	I1025 14:11:22.021559    1803 out.go:177] * Verifying ingress addon...
	I1025 14:11:22.017105    1803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.512351375s)
	I1025 14:11:22.017121    1803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.492739917s)
	I1025 14:11:22.017162    1803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.2961405s)
	I1025 14:11:22.017193    1803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.141420166s)
	I1025 14:11:22.017249    1803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.746366375s)
	I1025 14:11:22.017285    1803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.689160583s)
	I1025 14:11:22.017302    1803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.6424005s)
	W1025 14:11:22.021641    1803 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 14:11:22.021641    1803 addons.go:467] Verifying addon registry=true in "addons-355000"
	I1025 14:11:22.034491    1803 retry.go:31] will retry after 368.386919ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 14:11:22.021682    1803 addons.go:467] Verifying addon metrics-server=true in "addons-355000"
	I1025 14:11:22.038498    1803 out.go:177] * Verifying registry addon...
	I1025 14:11:22.034766    1803 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 14:11:22.047847    1803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1025 14:11:22.048789    1803 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1025 14:11:22.054109    1803 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 14:11:22.054118    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:22.054603    1803 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 14:11:22.054607    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:22.056975    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:22.057366    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:22.404992    1803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 14:11:22.560405    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:22.561368    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:22.890284    1803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.775703083s)
	I1025 14:11:22.890303    1803 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-355000"
	I1025 14:11:22.904245    1803 out.go:177] * Verifying csi-hostpath-driver addon...
	I1025 14:11:22.914648    1803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 14:11:22.921975    1803 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 14:11:22.921984    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:22.934440    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:23.090961    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:23.091863    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:23.440045    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:23.562785    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:23.562956    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:23.783594    1803 pod_ready.go:102] pod "coredns-5dd5756b68-wvvth" in "kube-system" namespace has status "Ready":"False"
	I1025 14:11:23.939357    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:24.062589    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:24.062762    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:24.440056    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:24.560894    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:24.564347    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:24.939505    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:25.061991    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:25.061994    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:25.439742    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:25.565526    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:25.565559    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:25.869342    1803 pod_ready.go:102] pod "coredns-5dd5756b68-wvvth" in "kube-system" namespace has status "Ready":"False"
	I1025 14:11:25.939371    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:26.061616    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:26.062222    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:26.090182    1803 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 14:11:26.090196    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:11:26.120640    1803 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 14:11:26.126002    1803 addons.go:231] Setting addon gcp-auth=true in "addons-355000"
	I1025 14:11:26.126025    1803 host.go:66] Checking if "addons-355000" exists ...
	I1025 14:11:26.126833    1803 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 14:11:26.126844    1803 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/addons-355000/id_rsa Username:docker}
	I1025 14:11:26.160166    1803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1025 14:11:26.163237    1803 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1025 14:11:26.166275    1803 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 14:11:26.166280    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 14:11:26.171008    1803 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 14:11:26.171015    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 14:11:26.175705    1803 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 14:11:26.175711    1803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1025 14:11:26.182144    1803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 14:11:26.439429    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:26.546081    1803 addons.go:467] Verifying addon gcp-auth=true in "addons-355000"
	I1025 14:11:26.550561    1803 out.go:177] * Verifying gcp-auth addon...
	I1025 14:11:26.557925    1803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 14:11:26.562465    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:26.563147    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:26.563370    1803 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 14:11:26.563381    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:26.565861    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:27.008495    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:27.061405    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:27.061643    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:27.068798    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:27.439020    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:27.561201    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:27.561376    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:27.568557    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:27.937910    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:28.062358    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:28.062489    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:28.068165    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:28.283171    1803 pod_ready.go:102] pod "coredns-5dd5756b68-wvvth" in "kube-system" namespace has status "Ready":"False"
	I1025 14:11:28.441125    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:28.561508    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:28.561608    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:28.568272    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:28.939284    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:29.061453    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:29.061565    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:29.068329    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:29.439549    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:29.561117    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:29.561641    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:29.568400    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:29.784566    1803 pod_ready.go:97] pod "coredns-5dd5756b68-wvvth" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-25 14:11:17 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-25 14:11:17 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-25 14:11:17 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-25 14:11:17 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-10-25 14:11:17 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerSt
ateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-10-25 14:11:19 -0700 PDT,FinishedAt:2023-10-25 14:11:29 -0700 PDT,ContainerID:docker://e736f78db3153ad7a65e001f766555e234f733c2fcfa82a79853c760f2daacb8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://e736f78db3153ad7a65e001f766555e234f733c2fcfa82a79853c760f2daacb8 Started:0x14002bd5ef0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1025 14:11:29.784578    1803 pod_ready.go:81] duration metric: took 10.010364083s waiting for pod "coredns-5dd5756b68-wvvth" in "kube-system" namespace to be "Ready" ...
	E1025 14:11:29.784583    1803 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-wvvth" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-25 14:11:17 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-25 14:11:17 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-25 14:11:17 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-25 14:11:17 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-10-25 14:11:17 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Runnin
g:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-10-25 14:11:19 -0700 PDT,FinishedAt:2023-10-25 14:11:29 -0700 PDT,ContainerID:docker://e736f78db3153ad7a65e001f766555e234f733c2fcfa82a79853c760f2daacb8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://e736f78db3153ad7a65e001f766555e234f733c2fcfa82a79853c760f2daacb8 Started:0x14002bd5ef0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1025 14:11:29.784587    1803 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-355000" in "kube-system" namespace to be "Ready" ...
	I1025 14:11:29.787067    1803 pod_ready.go:92] pod "etcd-addons-355000" in "kube-system" namespace has status "Ready":"True"
	I1025 14:11:29.787073    1803 pod_ready.go:81] duration metric: took 2.483333ms waiting for pod "etcd-addons-355000" in "kube-system" namespace to be "Ready" ...
	I1025 14:11:29.787077    1803 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-355000" in "kube-system" namespace to be "Ready" ...
	I1025 14:11:29.790118    1803 pod_ready.go:92] pod "kube-apiserver-addons-355000" in "kube-system" namespace has status "Ready":"True"
	I1025 14:11:29.790126    1803 pod_ready.go:81] duration metric: took 3.046708ms waiting for pod "kube-apiserver-addons-355000" in "kube-system" namespace to be "Ready" ...
	I1025 14:11:29.790130    1803 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-355000" in "kube-system" namespace to be "Ready" ...
	I1025 14:11:29.792733    1803 pod_ready.go:92] pod "kube-controller-manager-addons-355000" in "kube-system" namespace has status "Ready":"True"
	I1025 14:11:29.792739    1803 pod_ready.go:81] duration metric: took 2.60475ms waiting for pod "kube-controller-manager-addons-355000" in "kube-system" namespace to be "Ready" ...
	I1025 14:11:29.792743    1803 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-99n79" in "kube-system" namespace to be "Ready" ...
	I1025 14:11:29.794941    1803 pod_ready.go:92] pod "kube-proxy-99n79" in "kube-system" namespace has status "Ready":"True"
	I1025 14:11:29.794945    1803 pod_ready.go:81] duration metric: took 2.199875ms waiting for pod "kube-proxy-99n79" in "kube-system" namespace to be "Ready" ...
	I1025 14:11:29.794949    1803 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-355000" in "kube-system" namespace to be "Ready" ...
	I1025 14:11:29.939494    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:30.061343    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:30.061439    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:30.068269    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:30.184426    1803 pod_ready.go:92] pod "kube-scheduler-addons-355000" in "kube-system" namespace has status "Ready":"True"
	I1025 14:11:30.184437    1803 pod_ready.go:81] duration metric: took 389.487709ms waiting for pod "kube-scheduler-addons-355000" in "kube-system" namespace to be "Ready" ...
	I1025 14:11:30.184440    1803 pod_ready.go:38] duration metric: took 11.4261895s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 14:11:30.184450    1803 api_server.go:52] waiting for apiserver process to appear ...
	I1025 14:11:30.184515    1803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 14:11:30.189985    1803 api_server.go:72] duration metric: took 11.51038675s to wait for apiserver process to appear ...
	I1025 14:11:30.189994    1803 api_server.go:88] waiting for apiserver healthz status ...
	I1025 14:11:30.190002    1803 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I1025 14:11:30.193583    1803 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I1025 14:11:30.194236    1803 api_server.go:141] control plane version: v1.28.3
	I1025 14:11:30.194242    1803 api_server.go:131] duration metric: took 4.24525ms to wait for apiserver health ...
	I1025 14:11:30.194246    1803 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 14:11:30.388113    1803 system_pods.go:59] 18 kube-system pods found
	I1025 14:11:30.388126    1803 system_pods.go:61] "coredns-5dd5756b68-hkh4d" [a965a56a-37c9-4005-b715-8ef641285070] Running
	I1025 14:11:30.388130    1803 system_pods.go:61] "coredns-5dd5756b68-wvvth" [8e7369f3-ba7e-436f-b59b-3a93da19bef1] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1025 14:11:30.388134    1803 system_pods.go:61] "csi-hostpath-attacher-0" [09776c38-a2e7-4b91-ae0a-7d807336c22c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 14:11:30.388137    1803 system_pods.go:61] "csi-hostpath-resizer-0" [d53eb220-af7e-4dd9-9cf9-fa07cee80950] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 14:11:30.388140    1803 system_pods.go:61] "csi-hostpathplugin-jzs76" [adc76faf-4e57-4156-ba81-09507f13566e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 14:11:30.388144    1803 system_pods.go:61] "etcd-addons-355000" [ece42ab1-f425-4d23-8815-c6f2446f1f8b] Running
	I1025 14:11:30.388146    1803 system_pods.go:61] "kube-apiserver-addons-355000" [5d5db9d6-0f5e-46d4-a6bf-f3a6570ffb48] Running
	I1025 14:11:30.388148    1803 system_pods.go:61] "kube-controller-manager-addons-355000" [95e1632d-98d5-4ddf-bab1-81f2dc320d9d] Running
	I1025 14:11:30.388152    1803 system_pods.go:61] "kube-ingress-dns-minikube" [47767f94-eee8-477e-a5a7-32ab60af3ad5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 14:11:30.388155    1803 system_pods.go:61] "kube-proxy-99n79" [95c265cb-fc5d-4959-a3b2-49b144c35670] Running
	I1025 14:11:30.388157    1803 system_pods.go:61] "kube-scheduler-addons-355000" [0f2b794d-b457-4922-8e26-54a1e92ef2d9] Running
	I1025 14:11:30.388159    1803 system_pods.go:61] "metrics-server-7c66d45ddc-25r7x" [85efce74-c581-409e-9d09-038930b453b7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 14:11:30.388164    1803 system_pods.go:61] "nvidia-device-plugin-daemonset-wsrgl" [bd51af8f-ca72-4bf7-b1d8-6a39f9994c22] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 14:11:30.388167    1803 system_pods.go:61] "registry-8n8vt" [cae3a588-d76d-4af2-a97e-8e37b78c04b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 14:11:30.388170    1803 system_pods.go:61] "registry-proxy-rxr5v" [1c04e71c-98ce-4108-9506-d0880f28a3e0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 14:11:30.388173    1803 system_pods.go:61] "snapshot-controller-58dbcc7b99-2bcjr" [9ba488cb-f853-4108-a5da-b8f9a3e4c893] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 14:11:30.388176    1803 system_pods.go:61] "snapshot-controller-58dbcc7b99-vvgvt" [a20d527c-1730-49e3-9964-f0ff9cba297e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 14:11:30.388178    1803 system_pods.go:61] "storage-provisioner" [66f15313-2514-48fb-9ede-66c1e7046729] Running
	I1025 14:11:30.388181    1803 system_pods.go:74] duration metric: took 193.933708ms to wait for pod list to return data ...
	I1025 14:11:30.388185    1803 default_sa.go:34] waiting for default service account to be created ...
	I1025 14:11:30.439513    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:30.680391    1803 default_sa.go:45] found service account: "default"
	I1025 14:11:30.680405    1803 default_sa.go:55] duration metric: took 292.218625ms for default service account to be created ...
	I1025 14:11:30.680411    1803 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 14:11:30.681073    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:30.681141    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:30.681196    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:30.788206    1803 system_pods.go:86] 17 kube-system pods found
	I1025 14:11:30.788219    1803 system_pods.go:89] "coredns-5dd5756b68-hkh4d" [a965a56a-37c9-4005-b715-8ef641285070] Running
	I1025 14:11:30.788224    1803 system_pods.go:89] "csi-hostpath-attacher-0" [09776c38-a2e7-4b91-ae0a-7d807336c22c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 14:11:30.788228    1803 system_pods.go:89] "csi-hostpath-resizer-0" [d53eb220-af7e-4dd9-9cf9-fa07cee80950] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 14:11:30.788232    1803 system_pods.go:89] "csi-hostpathplugin-jzs76" [adc76faf-4e57-4156-ba81-09507f13566e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 14:11:30.788234    1803 system_pods.go:89] "etcd-addons-355000" [ece42ab1-f425-4d23-8815-c6f2446f1f8b] Running
	I1025 14:11:30.788237    1803 system_pods.go:89] "kube-apiserver-addons-355000" [5d5db9d6-0f5e-46d4-a6bf-f3a6570ffb48] Running
	I1025 14:11:30.788240    1803 system_pods.go:89] "kube-controller-manager-addons-355000" [95e1632d-98d5-4ddf-bab1-81f2dc320d9d] Running
	I1025 14:11:30.788243    1803 system_pods.go:89] "kube-ingress-dns-minikube" [47767f94-eee8-477e-a5a7-32ab60af3ad5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 14:11:30.788246    1803 system_pods.go:89] "kube-proxy-99n79" [95c265cb-fc5d-4959-a3b2-49b144c35670] Running
	I1025 14:11:30.788248    1803 system_pods.go:89] "kube-scheduler-addons-355000" [0f2b794d-b457-4922-8e26-54a1e92ef2d9] Running
	I1025 14:11:30.788251    1803 system_pods.go:89] "metrics-server-7c66d45ddc-25r7x" [85efce74-c581-409e-9d09-038930b453b7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 14:11:30.788255    1803 system_pods.go:89] "nvidia-device-plugin-daemonset-wsrgl" [bd51af8f-ca72-4bf7-b1d8-6a39f9994c22] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 14:11:30.788258    1803 system_pods.go:89] "registry-8n8vt" [cae3a588-d76d-4af2-a97e-8e37b78c04b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 14:11:30.788262    1803 system_pods.go:89] "registry-proxy-rxr5v" [1c04e71c-98ce-4108-9506-d0880f28a3e0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 14:11:30.788266    1803 system_pods.go:89] "snapshot-controller-58dbcc7b99-2bcjr" [9ba488cb-f853-4108-a5da-b8f9a3e4c893] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 14:11:30.788270    1803 system_pods.go:89] "snapshot-controller-58dbcc7b99-vvgvt" [a20d527c-1730-49e3-9964-f0ff9cba297e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 14:11:30.788272    1803 system_pods.go:89] "storage-provisioner" [66f15313-2514-48fb-9ede-66c1e7046729] Running
	I1025 14:11:30.788275    1803 system_pods.go:126] duration metric: took 107.86175ms to wait for k8s-apps to be running ...
	I1025 14:11:30.788280    1803 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 14:11:30.788333    1803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 14:11:30.793441    1803 system_svc.go:56] duration metric: took 5.159208ms WaitForService to wait for kubelet.
	I1025 14:11:30.793448    1803 kubeadm.go:581] duration metric: took 12.113855084s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 14:11:30.793458    1803 node_conditions.go:102] verifying NodePressure condition ...
	I1025 14:11:30.939041    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:30.983977    1803 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1025 14:11:30.983986    1803 node_conditions.go:123] node cpu capacity is 2
	I1025 14:11:30.983993    1803 node_conditions.go:105] duration metric: took 190.531167ms to run NodePressure ...
	I1025 14:11:30.983998    1803 start.go:228] waiting for startup goroutines ...
	I1025 14:11:31.060302    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:31.060357    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:31.068776    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:31.439268    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:31.561416    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:31.561544    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:31.568679    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:31.938984    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:32.061811    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:32.061910    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:32.068203    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:32.439053    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:32.560498    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 14:11:32.560588    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:32.568573    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:32.940276    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:33.061392    1803 kapi.go:107] duration metric: took 11.01360275s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 14:11:33.061455    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:33.067582    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:33.439309    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:33.561428    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:33.568719    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:33.939229    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:34.061194    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:34.068621    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:34.439345    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:34.561455    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:34.568449    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:34.939384    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:35.061071    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:35.068313    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:35.439219    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:35.561283    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:35.568499    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:35.939390    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:36.061453    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:36.068260    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:36.439348    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:36.561305    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:36.568480    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:36.998958    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:37.060816    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:37.068779    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:37.439202    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:37.561354    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:37.568417    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:37.941572    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:38.061854    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:38.068235    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:38.439048    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:38.561176    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:38.567494    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:38.939353    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:39.061285    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:39.070654    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:39.439540    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:39.561489    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:39.568558    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:39.939579    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:40.061994    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:40.068843    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:40.439053    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:40.560801    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:40.567216    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:40.940838    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:41.061286    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:41.068313    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:41.439810    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:41.561105    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:41.568300    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:41.939215    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:42.061513    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:42.068535    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:42.439337    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:42.560835    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:42.569576    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:42.939295    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:43.061224    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:43.069072    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:43.439447    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:43.561285    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:43.568307    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:43.981547    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:44.059115    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:44.069064    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:44.439334    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:44.561296    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:44.568479    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:44.939190    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:45.061117    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:45.068578    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:45.439181    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:45.561380    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:45.568320    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:45.939383    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:46.061292    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:46.068255    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:46.439004    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:46.560832    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:46.568423    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:46.939209    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:47.060883    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:47.068342    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:47.445303    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:47.561198    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:47.568369    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:47.939960    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:48.061008    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:48.069020    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:48.439311    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:48.561104    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:48.568861    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:48.939617    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:49.060962    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:49.068634    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:49.438847    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:49.561121    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:49.568506    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:49.939161    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:50.061116    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:50.068197    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:50.439256    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:50.559774    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:50.569366    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:50.939117    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:51.061264    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:51.068526    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:51.439037    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:51.561018    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:51.570753    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:51.938948    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:52.059355    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:52.068996    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:52.438988    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:52.560887    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:52.568433    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:52.939349    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:53.060916    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:53.068359    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:53.438774    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:53.561124    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:53.568276    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:53.939323    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:54.061072    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:54.068472    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:54.437552    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:54.560894    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:54.568237    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:54.939053    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:55.060835    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:55.068461    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:55.439139    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:55.561437    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:55.568771    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:55.939290    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:56.060989    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:56.068209    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:56.439067    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:56.560448    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:56.568543    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:56.939067    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:57.060901    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:57.068463    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:57.439221    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:57.560855    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:57.567241    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:57.937670    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:58.061017    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:58.068364    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:58.439150    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:58.561031    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:58.568995    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:58.939419    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:59.060795    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:59.068356    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:59.439187    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:11:59.561370    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:11:59.568112    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:11:59.939426    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:00.061509    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:00.068188    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:00.439141    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:00.560983    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:00.568366    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:00.939300    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:01.061214    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:01.068909    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:01.439020    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:01.560887    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:01.568332    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:01.939085    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:02.061032    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:02.068461    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:02.439128    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:02.558924    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:02.568807    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:02.939343    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:03.058794    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:03.069182    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:03.439416    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:03.561295    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:03.568248    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:03.939052    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:04.061062    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:04.068449    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:04.439087    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:04.562277    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:04.567983    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:04.939923    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:05.061378    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:05.068213    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:05.439249    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:05.560895    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:05.568376    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:05.939250    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:06.061735    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:06.069313    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:06.439138    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:06.561322    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:06.568265    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:06.939015    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:07.060805    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:07.068214    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:07.439077    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:07.560929    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:07.568367    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:07.939196    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:08.060793    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:08.068722    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:08.438946    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:08.561158    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:08.568689    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:08.939071    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:09.061546    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:09.068186    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:09.439616    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:09.559240    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:09.568925    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:09.939179    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:10.061031    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:10.068215    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:10.439118    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:10.560962    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:10.568358    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:10.938686    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:11.061034    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:11.068407    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:11.438948    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:11.560776    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:11.568283    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:11.939200    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:12.061157    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:12.068432    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:12.439251    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:12.561036    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:12.568444    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:12.946469    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:13.060972    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:13.069086    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:13.438858    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:13.560652    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:13.568292    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:13.939641    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 14:12:14.062875    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:14.068013    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:14.439211    1803 kapi.go:107] duration metric: took 51.524832375s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 14:12:14.560676    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:14.568202    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:15.061344    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:15.068205    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:15.561059    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:15.568195    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:16.060767    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:16.068218    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:16.560902    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:16.568200    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:17.061609    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:17.068266    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:17.560867    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:17.568404    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:18.061358    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:18.068239    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:18.561026    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:18.568138    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:19.061548    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:19.068094    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:19.560926    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:19.567559    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:20.061198    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:20.068267    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:20.560987    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:20.568160    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:21.061145    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:21.068225    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:21.560795    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:21.568247    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:22.061026    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:22.068082    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:22.561227    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:22.568081    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:23.060959    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:23.068226    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:23.560936    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:23.568069    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:24.061030    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:24.068227    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:24.560180    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:24.568192    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:25.061444    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:25.068119    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:25.561005    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:25.568038    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:26.061399    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:26.069348    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:26.560686    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:26.568189    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:27.061399    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:27.068683    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:27.560991    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:27.568423    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:28.061314    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:28.068139    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:28.560752    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:28.568705    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:29.061272    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:29.068124    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:29.560814    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:29.568185    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:30.060824    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:30.068600    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:30.561674    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:30.568207    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:31.061260    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:31.068021    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:31.560902    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:31.568187    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:32.061102    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:32.068018    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:32.560914    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:32.568151    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:33.061223    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:33.068742    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:33.560747    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:33.568245    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:34.061264    1803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 14:12:34.067981    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:34.560930    1803 kapi.go:107] duration metric: took 1m12.526539834s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 14:12:34.567522    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:35.070300    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:35.569194    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:36.069299    1803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 14:12:36.569199    1803 kapi.go:107] duration metric: took 1m10.011638375s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 14:12:36.574256    1803 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-355000 cluster.
	I1025 14:12:36.586255    1803 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 14:12:36.590295    1803 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1025 14:12:36.593227    1803 out.go:177] * Enabled addons: inspektor-gadget, ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1025 14:12:36.597284    1803 addons.go:502] enable addons completed in 1m17.940149292s: enabled=[inspektor-gadget ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner metrics-server default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1025 14:12:36.597296    1803 start.go:233] waiting for cluster config update ...
	I1025 14:12:36.597304    1803 start.go:242] writing updated cluster config ...
	I1025 14:12:36.598297    1803 ssh_runner.go:195] Run: rm -f paused
	I1025 14:12:36.732660    1803 start.go:600] kubectl: 1.27.2, cluster: 1.28.3 (minor skew: 1)
	I1025 14:12:36.737358    1803 out.go:177] * Done! kubectl is now configured to use "addons-355000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-10-25 21:10:45 UTC, ends at Wed 2023-10-25 21:14:03 UTC. --
	Oct 25 21:13:43 addons-355000 dockerd[1166]: time="2023-10-25T21:13:43.216905916Z" level=info msg="shim disconnected" id=48ce3049bb551cd8684ff9f1f8f1e76df225a057f5394c8d105ab8b8da4e1246 namespace=moby
	Oct 25 21:13:43 addons-355000 dockerd[1166]: time="2023-10-25T21:13:43.216934749Z" level=warning msg="cleaning up after shim disconnected" id=48ce3049bb551cd8684ff9f1f8f1e76df225a057f5394c8d105ab8b8da4e1246 namespace=moby
	Oct 25 21:13:43 addons-355000 dockerd[1166]: time="2023-10-25T21:13:43.216938916Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 25 21:13:43 addons-355000 dockerd[1160]: time="2023-10-25T21:13:43.217019749Z" level=info msg="ignoring event" container=48ce3049bb551cd8684ff9f1f8f1e76df225a057f5394c8d105ab8b8da4e1246 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 21:13:55 addons-355000 dockerd[1166]: time="2023-10-25T21:13:55.788476931Z" level=info msg="shim disconnected" id=e514afcbd4f52463570e54790939b8f7b1db4110147d23fbf53b3fd5ef1f1fc0 namespace=moby
	Oct 25 21:13:55 addons-355000 dockerd[1166]: time="2023-10-25T21:13:55.788508139Z" level=warning msg="cleaning up after shim disconnected" id=e514afcbd4f52463570e54790939b8f7b1db4110147d23fbf53b3fd5ef1f1fc0 namespace=moby
	Oct 25 21:13:55 addons-355000 dockerd[1166]: time="2023-10-25T21:13:55.788559222Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 25 21:13:55 addons-355000 dockerd[1160]: time="2023-10-25T21:13:55.788751222Z" level=info msg="ignoring event" container=e514afcbd4f52463570e54790939b8f7b1db4110147d23fbf53b3fd5ef1f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 21:13:59 addons-355000 dockerd[1160]: time="2023-10-25T21:13:59.448996004Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=4fe56672d47e61bffa2034c1b3b6f01e66a686488145444eefe8427e8fbae4f0
	Oct 25 21:13:59 addons-355000 dockerd[1160]: time="2023-10-25T21:13:59.531316854Z" level=info msg="ignoring event" container=4fe56672d47e61bffa2034c1b3b6f01e66a686488145444eefe8427e8fbae4f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 21:13:59 addons-355000 dockerd[1166]: time="2023-10-25T21:13:59.531374396Z" level=info msg="shim disconnected" id=4fe56672d47e61bffa2034c1b3b6f01e66a686488145444eefe8427e8fbae4f0 namespace=moby
	Oct 25 21:13:59 addons-355000 dockerd[1166]: time="2023-10-25T21:13:59.531401521Z" level=warning msg="cleaning up after shim disconnected" id=4fe56672d47e61bffa2034c1b3b6f01e66a686488145444eefe8427e8fbae4f0 namespace=moby
	Oct 25 21:13:59 addons-355000 dockerd[1166]: time="2023-10-25T21:13:59.531405605Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 25 21:13:59 addons-355000 dockerd[1160]: time="2023-10-25T21:13:59.605600991Z" level=info msg="ignoring event" container=c3ab0e2a3e8a1bdf888a9a06e0f14febaf0a6382e302d3ac7d85f1b881c42090 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 21:13:59 addons-355000 dockerd[1166]: time="2023-10-25T21:13:59.605744908Z" level=info msg="shim disconnected" id=c3ab0e2a3e8a1bdf888a9a06e0f14febaf0a6382e302d3ac7d85f1b881c42090 namespace=moby
	Oct 25 21:13:59 addons-355000 dockerd[1166]: time="2023-10-25T21:13:59.605814533Z" level=warning msg="cleaning up after shim disconnected" id=c3ab0e2a3e8a1bdf888a9a06e0f14febaf0a6382e302d3ac7d85f1b881c42090 namespace=moby
	Oct 25 21:13:59 addons-355000 dockerd[1166]: time="2023-10-25T21:13:59.605823283Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 25 21:14:00 addons-355000 dockerd[1166]: time="2023-10-25T21:14:00.386198752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 25 21:14:00 addons-355000 dockerd[1166]: time="2023-10-25T21:14:00.386230544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:14:00 addons-355000 dockerd[1166]: time="2023-10-25T21:14:00.386237419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 25 21:14:00 addons-355000 dockerd[1166]: time="2023-10-25T21:14:00.386241877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:14:00 addons-355000 dockerd[1160]: time="2023-10-25T21:14:00.437307330Z" level=info msg="ignoring event" container=5fabb230f7c23631f460210bba7275e050e0c7c7966e231892f462d29d03455b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 21:14:00 addons-355000 dockerd[1166]: time="2023-10-25T21:14:00.437413830Z" level=info msg="shim disconnected" id=5fabb230f7c23631f460210bba7275e050e0c7c7966e231892f462d29d03455b namespace=moby
	Oct 25 21:14:00 addons-355000 dockerd[1166]: time="2023-10-25T21:14:00.437441955Z" level=warning msg="cleaning up after shim disconnected" id=5fabb230f7c23631f460210bba7275e050e0c7c7966e231892f462d29d03455b namespace=moby
	Oct 25 21:14:00 addons-355000 dockerd[1166]: time="2023-10-25T21:14:00.437446163Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5fabb230f7c23       97e050c3e21e9                                                                                                                3 seconds ago        Exited              hello-world-app           2                   215be1a9ea6fa       hello-world-app-5d77478584-hz5cp
	83e7cd0ae2cb6       nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77                                                29 seconds ago       Running             nginx                     0                   5f9926f3eed89       nginx
	f6c97b204018b       ghcr.io/headlamp-k8s/headlamp@sha256:db0310cf5abef3ffd5aa87509b1f61a150ee705808c5b29704149101653d418b                        49 seconds ago       Running             headlamp                  0                   746dde117de77       headlamp-94b766c-mxhl6
	1388c78a2082d       fc9db2894f4e4                                                                                                                58 seconds ago       Exited              helper-pod                0                   c021bc6866e92       helper-pod-delete-pvc-cc6901cf-d9fc-4f53-8183-180e9a68fcdf
	74e3b8f21cc16       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              About a minute ago   Exited              busybox                   0                   d57a2dc58f396       test-local-path
	f989820ff785a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 About a minute ago   Running             gcp-auth                  0                   9fcb70fe0b721       gcp-auth-d4c87556c-gfq4v
	dfda46b85fd19       af594c6a879f2                                                                                                                About a minute ago   Exited              patch                     2                   2f4096f44a77c       ingress-nginx-admission-patch-txbbm
	2beb3afb6a04c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   2 minutes ago        Exited              create                    0                   e703dcd3eb87f       ingress-nginx-admission-create-gzn5f
	f30a76f8006a5       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       2 minutes ago        Running             local-path-provisioner    0                   c70232565a65b       local-path-provisioner-78b46b4d5c-fs2sl
	f34ccfaa607e6       ba04bb24b9575                                                                                                                2 minutes ago        Running             storage-provisioner       0                   1ea2ca1e7c128       storage-provisioner
	5d4ff86e57099       97e04611ad434                                                                                                                2 minutes ago        Running             coredns                   0                   774eaad7718dd       coredns-5dd5756b68-hkh4d
	ff1e6c00c8fb3       a5dd5cdd6d3ef                                                                                                                2 minutes ago        Running             kube-proxy                0                   47535643fb01e       kube-proxy-99n79
	ce42422b7b76f       42a4e73724daa                                                                                                                3 minutes ago        Running             kube-scheduler            0                   1b3f64b1b40de       kube-scheduler-addons-355000
	a2a0468afc9a6       8276439b4f237                                                                                                                3 minutes ago        Running             kube-controller-manager   0                   c740935b52b34       kube-controller-manager-addons-355000
	f3f02fcb2170e       9cdd6470f48c8                                                                                                                3 minutes ago        Running             etcd                      0                   d3520e9670818       etcd-addons-355000
	4ee7a937a6c77       537e9a59ee2fd                                                                                                                3 minutes ago        Running             kube-apiserver            0                   5f29956245c68       kube-apiserver-addons-355000
	
	* 
	* ==> coredns [5d4ff86e5709] <==
	* [INFO] 10.244.0.19:48678 - 58173 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000027584s
	[INFO] 10.244.0.19:48678 - 46418 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042959s
	[INFO] 10.244.0.19:48678 - 48418 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027292s
	[INFO] 10.244.0.19:48678 - 39912 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041375s
	[INFO] 10.244.0.19:34981 - 31818 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000035083s
	[INFO] 10.244.0.19:34981 - 35322 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000019083s
	[INFO] 10.244.0.19:34981 - 3997 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015083s
	[INFO] 10.244.0.19:34981 - 35409 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000014667s
	[INFO] 10.244.0.19:34981 - 17005 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044541s
	[INFO] 10.244.0.19:34981 - 55094 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000028209s
	[INFO] 10.244.0.19:34981 - 33939 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000011917s
	[INFO] 10.244.0.19:58067 - 22927 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000028917s
	[INFO] 10.244.0.19:45606 - 40101 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00001375s
	[INFO] 10.244.0.19:45606 - 43381 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000011542s
	[INFO] 10.244.0.19:58067 - 8857 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000013s
	[INFO] 10.244.0.19:58067 - 20233 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000027584s
	[INFO] 10.244.0.19:45606 - 37403 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000010667s
	[INFO] 10.244.0.19:58067 - 39511 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013666s
	[INFO] 10.244.0.19:58067 - 11339 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000105s
	[INFO] 10.244.0.19:45606 - 9770 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051458s
	[INFO] 10.244.0.19:45606 - 14681 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012083s
	[INFO] 10.244.0.19:45606 - 43828 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004075s
	[INFO] 10.244.0.19:58067 - 51103 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009792s
	[INFO] 10.244.0.19:45606 - 51145 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048292s
	[INFO] 10.244.0.19:58067 - 45912 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000028917s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-355000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-355000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=addons-355000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T14_11_04_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-355000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Oct 2023 21:11:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-355000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Oct 2023 21:13:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Oct 2023 21:13:37 +0000   Wed, 25 Oct 2023 21:11:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Oct 2023 21:13:37 +0000   Wed, 25 Oct 2023 21:11:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Oct 2023 21:13:37 +0000   Wed, 25 Oct 2023 21:11:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Oct 2023 21:13:37 +0000   Wed, 25 Oct 2023 21:11:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-355000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905016Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905016Ki
	  pods:               110
	System Info:
	  Machine ID:                 79c829d071bd45078145eb34fece7b0d
	  System UUID:                79c829d071bd45078145eb34fece7b0d
	  Boot ID:                    6c055a0e-adb3-4066-97e6-b0225440c456
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-hz5cp           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         23s
	  default                     nginx                                      0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         32s
	  gcp-auth                    gcp-auth-d4c87556c-gfq4v                   0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m37s
	  headlamp                    headlamp-94b766c-mxhl6                     0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         53s
	  kube-system                 coredns-5dd5756b68-hkh4d                   100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (1%!)(MISSING)        170Mi (4%!)(MISSING)     2m46s
	  kube-system                 etcd-addons-355000                         100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (2%!)(MISSING)       0 (0%!)(MISSING)         3m
	  kube-system                 kube-apiserver-addons-355000               250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m
	  kube-system                 kube-controller-manager-addons-355000      200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m
	  kube-system                 kube-proxy-99n79                           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m46s
	  kube-system                 kube-scheduler-addons-355000               100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m
	  kube-system                 storage-provisioner                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m42s
	  local-path-storage          local-path-provisioner-78b46b4d5c-fs2sl    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%!)(MISSING)  0 (0%!)(MISSING)
	  memory             170Mi (4%!)(MISSING)  170Mi (4%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-32Mi     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-64Ki     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m46s                kube-proxy       
	  Normal  Starting                 3m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m4s (x8 over 3m4s)  kubelet          Node addons-355000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x8 over 3m4s)  kubelet          Node addons-355000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x7 over 3m4s)  kubelet          Node addons-355000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m                   kubelet          Node addons-355000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m                   kubelet          Node addons-355000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m                   kubelet          Node addons-355000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m57s                kubelet          Node addons-355000 status is now: NodeReady
	  Normal  RegisteredNode           2m47s                node-controller  Node addons-355000 event: Registered Node addons-355000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.179181] systemd-fstab-generator[788]: Ignoring "noauto" for root device
	[  +0.079264] systemd-fstab-generator[799]: Ignoring "noauto" for root device
	[  +0.083982] systemd-fstab-generator[812]: Ignoring "noauto" for root device
	[  +1.220791] systemd-fstab-generator[970]: Ignoring "noauto" for root device
	[  +0.072017] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +0.071676] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.079114] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.082281] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[  +2.606692] systemd-fstab-generator[1153]: Ignoring "noauto" for root device
	[  +2.490713] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.214716] systemd-fstab-generator[1521]: Ignoring "noauto" for root device
	[Oct25 21:11] systemd-fstab-generator[2258]: Ignoring "noauto" for root device
	[ +14.184691] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.005716] kauditd_printk_skb: 89 callbacks suppressed
	[  +6.051883] kauditd_printk_skb: 37 callbacks suppressed
	[ +15.249663] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Oct25 21:12] kauditd_printk_skb: 8 callbacks suppressed
	[ +16.246211] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.619913] kauditd_printk_skb: 11 callbacks suppressed
	[Oct25 21:13] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.050175] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.090957] kauditd_printk_skb: 9 callbacks suppressed
	[ +15.735227] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.883711] kauditd_printk_skb: 9 callbacks suppressed
	[ +15.865216] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [f3f02fcb2170] <==
	* {"level":"info","ts":"2023-10-25T21:11:25.94174Z","caller":"traceutil/trace.go:171","msg":"trace[436310828] linearizableReadLoop","detail":"{readStateIndex:793; appliedIndex:792; }","duration":"107.044875ms","start":"2023-10-25T21:11:25.834688Z","end":"2023-10-25T21:11:25.941732Z","steps":["trace[436310828] 'read index received'  (duration: 106.950459ms)","trace[436310828] 'applied index is now lower than readState.Index'  (duration: 94µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-25T21:11:25.94183Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.145292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/ingress-nginx/ingress-nginx-controller-6f48fc54bd-jrg8q.1791760144934353\" ","response":"range_response_count:1 size:855"}
	{"level":"info","ts":"2023-10-25T21:11:25.94185Z","caller":"traceutil/trace.go:171","msg":"trace[1462697867] range","detail":"{range_begin:/registry/events/ingress-nginx/ingress-nginx-controller-6f48fc54bd-jrg8q.1791760144934353; range_end:; response_count:1; response_revision:776; }","duration":"107.179334ms","start":"2023-10-25T21:11:25.834666Z","end":"2023-10-25T21:11:25.941845Z","steps":["trace[1462697867] 'agreement among raft nodes before linearized reading'  (duration: 107.12925ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:11:25.941937Z","caller":"traceutil/trace.go:171","msg":"trace[1497781899] transaction","detail":"{read_only:false; response_revision:776; number_of_response:1; }","duration":"178.735583ms","start":"2023-10-25T21:11:25.763199Z","end":"2023-10-25T21:11:25.941934Z","steps":["trace[1497781899] 'process raft request'  (duration: 178.460417ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:11:27.081309Z","caller":"traceutil/trace.go:171","msg":"trace[1820366864] linearizableReadLoop","detail":"{readStateIndex:837; appliedIndex:836; }","duration":"240.3215ms","start":"2023-10-25T21:11:26.840978Z","end":"2023-10-25T21:11:27.081299Z","steps":["trace[1820366864] 'read index received'  (duration: 240.246708ms)","trace[1820366864] 'applied index is now lower than readState.Index'  (duration: 74.209µs)"],"step_count":2}
	{"level":"info","ts":"2023-10-25T21:11:27.08134Z","caller":"traceutil/trace.go:171","msg":"trace[132639216] transaction","detail":"{read_only:false; response_revision:820; number_of_response:1; }","duration":"240.637917ms","start":"2023-10-25T21:11:26.840673Z","end":"2023-10-25T21:11:27.081311Z","steps":["trace[132639216] 'process raft request'  (duration: 240.5725ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:11:27.081373Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.390042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-fhcqc\" ","response":"range_response_count:1 size:3216"}
	{"level":"info","ts":"2023-10-25T21:11:27.081384Z","caller":"traceutil/trace.go:171","msg":"trace[371963531] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-patch-fhcqc; range_end:; response_count:1; response_revision:820; }","duration":"240.409001ms","start":"2023-10-25T21:11:26.840972Z","end":"2023-10-25T21:11:27.081381Z","steps":["trace[371963531] 'agreement among raft nodes before linearized reading'  (duration: 240.368084ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:11:27.081438Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.449458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-d4c87556c-gfq4v\" ","response":"range_response_count:1 size:4148"}
	{"level":"info","ts":"2023-10-25T21:11:27.081444Z","caller":"traceutil/trace.go:171","msg":"trace[447865151] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-d4c87556c-gfq4v; range_end:; response_count:1; response_revision:820; }","duration":"240.456958ms","start":"2023-10-25T21:11:26.840986Z","end":"2023-10-25T21:11:27.081443Z","steps":["trace[447865151] 'agreement among raft nodes before linearized reading'  (duration: 240.442375ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:11:27.081491Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.704584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-wvvth\" ","response":"range_response_count:1 size:4767"}
	{"level":"info","ts":"2023-10-25T21:11:27.081497Z","caller":"traceutil/trace.go:171","msg":"trace[770209064] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-wvvth; range_end:; response_count:1; response_revision:820; }","duration":"224.711417ms","start":"2023-10-25T21:11:26.856784Z","end":"2023-10-25T21:11:27.081496Z","steps":["trace[770209064] 'agreement among raft nodes before linearized reading'  (duration: 224.697959ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:11:27.081533Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.518667ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs\" ","response":"range_response_count:1 size:560"}
	{"level":"info","ts":"2023-10-25T21:11:27.08154Z","caller":"traceutil/trace.go:171","msg":"trace[1430134984] range","detail":"{range_begin:/registry/serviceaccounts/gcp-auth/minikube-gcp-auth-certs; range_end:; response_count:1; response_revision:820; }","duration":"240.526209ms","start":"2023-10-25T21:11:26.841011Z","end":"2023-10-25T21:11:27.081537Z","steps":["trace[1430134984] 'agreement among raft nodes before linearized reading'  (duration: 240.50875ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:11:30.754265Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.663708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10572"}
	{"level":"info","ts":"2023-10-25T21:11:30.754295Z","caller":"traceutil/trace.go:171","msg":"trace[297246187] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:836; }","duration":"110.699167ms","start":"2023-10-25T21:11:30.64359Z","end":"2023-10-25T21:11:30.754289Z","steps":["trace[297246187] 'range keys from in-memory index tree'  (duration: 110.602334ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:11:30.754308Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.037958ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-10-25T21:11:30.754322Z","caller":"traceutil/trace.go:171","msg":"trace[1802352671] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:836; }","duration":"106.052583ms","start":"2023-10-25T21:11:30.648265Z","end":"2023-10-25T21:11:30.754318Z","steps":["trace[1802352671] 'range keys from in-memory index tree'  (duration: 106.00625ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:11:30.7544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.908459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82532"}
	{"level":"info","ts":"2023-10-25T21:11:30.754408Z","caller":"traceutil/trace.go:171","msg":"trace[2079859666] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:836; }","duration":"118.916375ms","start":"2023-10-25T21:11:30.635489Z","end":"2023-10-25T21:11:30.754406Z","steps":["trace[2079859666] 'range keys from in-memory index tree'  (duration: 118.838125ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-25T21:11:30.754458Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.002083ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13485"}
	{"level":"info","ts":"2023-10-25T21:11:30.754464Z","caller":"traceutil/trace.go:171","msg":"trace[729163599] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:836; }","duration":"119.039875ms","start":"2023-10-25T21:11:30.635423Z","end":"2023-10-25T21:11:30.754463Z","steps":["trace[729163599] 'range keys from in-memory index tree'  (duration: 118.926333ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:11:37.0719Z","caller":"traceutil/trace.go:171","msg":"trace[844965768] transaction","detail":"{read_only:false; response_revision:859; number_of_response:1; }","duration":"286.221126ms","start":"2023-10-25T21:11:36.785668Z","end":"2023-10-25T21:11:37.07189Z","steps":["trace[844965768] 'process raft request'  (duration: 285.919626ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:12:32.603071Z","caller":"traceutil/trace.go:171","msg":"trace[316744066] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"118.314312ms","start":"2023-10-25T21:12:32.484746Z","end":"2023-10-25T21:12:32.603061Z","steps":["trace[316744066] 'process raft request'  (duration: 118.156062ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-25T21:12:33.298485Z","caller":"traceutil/trace.go:171","msg":"trace[768652297] transaction","detail":"{read_only:false; response_revision:1099; number_of_response:1; }","duration":"105.646628ms","start":"2023-10-25T21:12:33.192831Z","end":"2023-10-25T21:12:33.298477Z","steps":["trace[768652297] 'process raft request'  (duration: 105.435045ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [f989820ff785] <==
	* 2023/10/25 21:12:35 GCP Auth Webhook started!
	2023/10/25 21:12:43 Ready to marshal response ...
	2023/10/25 21:12:43 Ready to write response ...
	2023/10/25 21:12:46 Ready to marshal response ...
	2023/10/25 21:12:46 Ready to write response ...
	2023/10/25 21:12:56 Ready to marshal response ...
	2023/10/25 21:12:56 Ready to write response ...
	2023/10/25 21:12:56 Ready to marshal response ...
	2023/10/25 21:12:56 Ready to write response ...
	2023/10/25 21:13:04 Ready to marshal response ...
	2023/10/25 21:13:04 Ready to write response ...
	2023/10/25 21:13:10 Ready to marshal response ...
	2023/10/25 21:13:10 Ready to write response ...
	2023/10/25 21:13:10 Ready to marshal response ...
	2023/10/25 21:13:10 Ready to write response ...
	2023/10/25 21:13:10 Ready to marshal response ...
	2023/10/25 21:13:10 Ready to write response ...
	2023/10/25 21:13:15 Ready to marshal response ...
	2023/10/25 21:13:15 Ready to write response ...
	2023/10/25 21:13:31 Ready to marshal response ...
	2023/10/25 21:13:31 Ready to write response ...
	2023/10/25 21:13:40 Ready to marshal response ...
	2023/10/25 21:13:40 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:14:03 up 3 min,  0 users,  load average: 2.81, 1.15, 0.44
	Linux addons-355000 5.10.57 #1 SMP PREEMPT Mon Oct 16 17:34:05 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4ee7a937a6c7] <==
	* I1025 21:13:30.519666       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:13:30.525527       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:13:30.525541       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:13:30.533777       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:13:30.533800       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:13:30.539517       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:13:30.539563       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:13:30.578711       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:13:30.578724       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:13:30.582528       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:13:30.582540       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:13:30.589844       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:13:30.589867       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:13:31.034035       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 21:13:31.148242       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.104.196"}
	W1025 21:13:31.526538       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1025 21:13:31.579162       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1025 21:13:31.597531       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1025 21:13:31.875068       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1025 21:13:31.880756       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1025 21:13:32.888383       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1025 21:13:40.543110       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.126.44"}
	I1025 21:13:50.852455       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1025 21:13:56.502919       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E1025 21:13:57.295408       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [a2a0468afc9a] <==
	* I1025 21:13:40.492064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.542µs"
	W1025 21:13:41.082921       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:13:41.082941       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1025 21:13:41.173144       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:13:41.173174       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1025 21:13:41.904019       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I1025 21:13:43.153448       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="31.041µs"
	I1025 21:13:44.163380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="24.875µs"
	I1025 21:13:45.171519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="30.459µs"
	W1025 21:13:46.268409       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:13:46.268467       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1025 21:13:47.624000       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1025 21:13:47.624063       1 shared_informer.go:318] Caches are synced for resource quota
	W1025 21:13:47.765861       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:13:47.765882       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1025 21:13:48.039310       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1025 21:13:48.039350       1 shared_informer.go:318] Caches are synced for garbage collector
	W1025 21:13:50.298657       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:13:50.298685       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1025 21:13:52.407976       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:13:52.407995       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1025 21:13:56.431317       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1025 21:13:56.433829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6f48fc54bd" duration="2.084µs"
	I1025 21:13:56.434349       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1025 21:14:01.276517       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.667µs"
	
	* 
	* ==> kube-proxy [ff1e6c00c8fb] <==
	* I1025 21:11:17.766740       1 server_others.go:69] "Using iptables proxy"
	I1025 21:11:17.772343       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I1025 21:11:17.781567       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1025 21:11:17.781578       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 21:11:17.782320       1 server_others.go:152] "Using iptables Proxier"
	I1025 21:11:17.782356       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 21:11:17.782460       1 server.go:846] "Version info" version="v1.28.3"
	I1025 21:11:17.782465       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 21:11:17.782884       1 config.go:188] "Starting service config controller"
	I1025 21:11:17.782907       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 21:11:17.782920       1 config.go:97] "Starting endpoint slice config controller"
	I1025 21:11:17.782923       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 21:11:17.783204       1 config.go:315] "Starting node config controller"
	I1025 21:11:17.783207       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 21:11:17.883371       1 shared_informer.go:318] Caches are synced for service config
	I1025 21:11:17.883379       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 21:11:17.883371       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ce42422b7b76] <==
	* W1025 21:11:01.313013       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 21:11:01.313227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 21:11:01.313309       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 21:11:01.313343       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1025 21:11:01.313387       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 21:11:01.313411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 21:11:01.313445       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 21:11:01.313457       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 21:11:01.313484       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 21:11:01.313507       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1025 21:11:01.313517       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 21:11:01.313571       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 21:11:01.313532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 21:11:01.313626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1025 21:11:01.313497       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 21:11:01.313769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 21:11:02.222051       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1025 21:11:02.222155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1025 21:11:02.222056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 21:11:02.222200       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 21:11:02.304247       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 21:11:02.304267       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 21:11:02.316270       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 21:11:02.316349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1025 21:11:04.909306       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-25 21:10:45 UTC, ends at Wed 2023-10-25 21:14:04 UTC. --
	Oct 25 21:13:45 addons-355000 kubelet[2276]: E1025 21:13:45.165981    2276 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-hz5cp_default(bce87623-6283-4b3a-b949-0aebb9498fff)\"" pod="default/hello-world-app-5d77478584-hz5cp" podUID="bce87623-6283-4b3a-b949-0aebb9498fff"
	Oct 25 21:13:51 addons-355000 kubelet[2276]: I1025 21:13:51.361200    2276 scope.go:117] "RemoveContainer" containerID="7a42377cf2d90edf7af423e7a67bd93a71c350127f2a65c8750ebc45e6281b6a"
	Oct 25 21:13:51 addons-355000 kubelet[2276]: E1025 21:13:51.361344    2276 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(47767f94-eee8-477e-a5a7-32ab60af3ad5)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="47767f94-eee8-477e-a5a7-32ab60af3ad5"
	Oct 25 21:13:55 addons-355000 kubelet[2276]: I1025 21:13:55.991878    2276 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgl79\" (UniqueName: \"kubernetes.io/projected/47767f94-eee8-477e-a5a7-32ab60af3ad5-kube-api-access-qgl79\") pod \"47767f94-eee8-477e-a5a7-32ab60af3ad5\" (UID: \"47767f94-eee8-477e-a5a7-32ab60af3ad5\") "
	Oct 25 21:13:55 addons-355000 kubelet[2276]: I1025 21:13:55.994877    2276 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47767f94-eee8-477e-a5a7-32ab60af3ad5-kube-api-access-qgl79" (OuterVolumeSpecName: "kube-api-access-qgl79") pod "47767f94-eee8-477e-a5a7-32ab60af3ad5" (UID: "47767f94-eee8-477e-a5a7-32ab60af3ad5"). InnerVolumeSpecName "kube-api-access-qgl79". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 25 21:13:56 addons-355000 kubelet[2276]: I1025 21:13:56.092922    2276 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qgl79\" (UniqueName: \"kubernetes.io/projected/47767f94-eee8-477e-a5a7-32ab60af3ad5-kube-api-access-qgl79\") on node \"addons-355000\" DevicePath \"\""
	Oct 25 21:13:56 addons-355000 kubelet[2276]: I1025 21:13:56.228847    2276 scope.go:117] "RemoveContainer" containerID="7a42377cf2d90edf7af423e7a67bd93a71c350127f2a65c8750ebc45e6281b6a"
	Oct 25 21:13:56 addons-355000 kubelet[2276]: I1025 21:13:56.364542    2276 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="47767f94-eee8-477e-a5a7-32ab60af3ad5" path="/var/lib/kubelet/pods/47767f94-eee8-477e-a5a7-32ab60af3ad5/volumes"
	Oct 25 21:13:58 addons-355000 kubelet[2276]: I1025 21:13:58.364003    2276 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="27461fa6-b4a2-41c0-b48e-1a20646f0538" path="/var/lib/kubelet/pods/27461fa6-b4a2-41c0-b48e-1a20646f0538/volumes"
	Oct 25 21:13:58 addons-355000 kubelet[2276]: I1025 21:13:58.364157    2276 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7e1346bd-55ab-4d8b-8e99-818ef1334911" path="/var/lib/kubelet/pods/7e1346bd-55ab-4d8b-8e99-818ef1334911/volumes"
	Oct 25 21:13:59 addons-355000 kubelet[2276]: I1025 21:13:59.716474    2276 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbn5w\" (UniqueName: \"kubernetes.io/projected/e5058b70-ddf2-4324-bc5d-3fce020132f9-kube-api-access-fbn5w\") pod \"e5058b70-ddf2-4324-bc5d-3fce020132f9\" (UID: \"e5058b70-ddf2-4324-bc5d-3fce020132f9\") "
	Oct 25 21:13:59 addons-355000 kubelet[2276]: I1025 21:13:59.716719    2276 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e5058b70-ddf2-4324-bc5d-3fce020132f9-webhook-cert\") pod \"e5058b70-ddf2-4324-bc5d-3fce020132f9\" (UID: \"e5058b70-ddf2-4324-bc5d-3fce020132f9\") "
	Oct 25 21:13:59 addons-355000 kubelet[2276]: I1025 21:13:59.721008    2276 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5058b70-ddf2-4324-bc5d-3fce020132f9-kube-api-access-fbn5w" (OuterVolumeSpecName: "kube-api-access-fbn5w") pod "e5058b70-ddf2-4324-bc5d-3fce020132f9" (UID: "e5058b70-ddf2-4324-bc5d-3fce020132f9"). InnerVolumeSpecName "kube-api-access-fbn5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 25 21:13:59 addons-355000 kubelet[2276]: I1025 21:13:59.721052    2276 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5058b70-ddf2-4324-bc5d-3fce020132f9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "e5058b70-ddf2-4324-bc5d-3fce020132f9" (UID: "e5058b70-ddf2-4324-bc5d-3fce020132f9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 25 21:13:59 addons-355000 kubelet[2276]: I1025 21:13:59.817346    2276 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e5058b70-ddf2-4324-bc5d-3fce020132f9-webhook-cert\") on node \"addons-355000\" DevicePath \"\""
	Oct 25 21:13:59 addons-355000 kubelet[2276]: I1025 21:13:59.817361    2276 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fbn5w\" (UniqueName: \"kubernetes.io/projected/e5058b70-ddf2-4324-bc5d-3fce020132f9-kube-api-access-fbn5w\") on node \"addons-355000\" DevicePath \"\""
	Oct 25 21:14:00 addons-355000 kubelet[2276]: I1025 21:14:00.260014    2276 scope.go:117] "RemoveContainer" containerID="4fe56672d47e61bffa2034c1b3b6f01e66a686488145444eefe8427e8fbae4f0"
	Oct 25 21:14:00 addons-355000 kubelet[2276]: I1025 21:14:00.267550    2276 scope.go:117] "RemoveContainer" containerID="4fe56672d47e61bffa2034c1b3b6f01e66a686488145444eefe8427e8fbae4f0"
	Oct 25 21:14:00 addons-355000 kubelet[2276]: E1025 21:14:00.267826    2276 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 4fe56672d47e61bffa2034c1b3b6f01e66a686488145444eefe8427e8fbae4f0" containerID="4fe56672d47e61bffa2034c1b3b6f01e66a686488145444eefe8427e8fbae4f0"
	Oct 25 21:14:00 addons-355000 kubelet[2276]: I1025 21:14:00.267848    2276 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"4fe56672d47e61bffa2034c1b3b6f01e66a686488145444eefe8427e8fbae4f0"} err="failed to get container status \"4fe56672d47e61bffa2034c1b3b6f01e66a686488145444eefe8427e8fbae4f0\": rpc error: code = Unknown desc = Error response from daemon: No such container: 4fe56672d47e61bffa2034c1b3b6f01e66a686488145444eefe8427e8fbae4f0"
	Oct 25 21:14:00 addons-355000 kubelet[2276]: I1025 21:14:00.360526    2276 scope.go:117] "RemoveContainer" containerID="48ce3049bb551cd8684ff9f1f8f1e76df225a057f5394c8d105ab8b8da4e1246"
	Oct 25 21:14:00 addons-355000 kubelet[2276]: I1025 21:14:00.370119    2276 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e5058b70-ddf2-4324-bc5d-3fce020132f9" path="/var/lib/kubelet/pods/e5058b70-ddf2-4324-bc5d-3fce020132f9/volumes"
	Oct 25 21:14:01 addons-355000 kubelet[2276]: I1025 21:14:01.270159    2276 scope.go:117] "RemoveContainer" containerID="48ce3049bb551cd8684ff9f1f8f1e76df225a057f5394c8d105ab8b8da4e1246"
	Oct 25 21:14:01 addons-355000 kubelet[2276]: I1025 21:14:01.270328    2276 scope.go:117] "RemoveContainer" containerID="5fabb230f7c23631f460210bba7275e050e0c7c7966e231892f462d29d03455b"
	Oct 25 21:14:01 addons-355000 kubelet[2276]: E1025 21:14:01.270439    2276 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-hz5cp_default(bce87623-6283-4b3a-b949-0aebb9498fff)\"" pod="default/hello-world-app-5d77478584-hz5cp" podUID="bce87623-6283-4b3a-b949-0aebb9498fff"
	
	* 
	* ==> storage-provisioner [f34ccfaa607e] <==
	* I1025 21:11:22.549447       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 21:11:22.582621       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 21:11:22.582673       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 21:11:22.596764       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 21:11:22.599555       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-355000_401fb035-e122-452f-ba58-17860ed039c1!
	I1025 21:11:22.604986       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"43086830-8c56-4799-84b2-b25c098cce9e", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-355000_401fb035-e122-452f-ba58-17860ed039c1 became leader
	I1025 21:11:22.741298       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-355000_401fb035-e122-452f-ba58-17860ed039c1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-355000 -n addons-355000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-355000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (33.60s)

                                                
                                    
x
+
TestCertOptions (10.08s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-673000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
E1025 14:28:30.776433    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-673000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.795924958s)

                                                
                                                
-- stdout --
	* [cert-options-673000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-673000 in cluster cert-options-673000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-673000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-673000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-673000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-673000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-673000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (80.300709ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-673000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-673000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-673000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-673000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-673000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (42.789917ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-673000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-673000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-673000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-10-25 14:28:40.670641 -0700 PDT m=+1111.537545918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-673000 -n cert-options-673000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-673000 -n cert-options-673000: exit status 7 (32.023209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-673000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-673000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-673000
--- FAIL: TestCertOptions (10.08s)
E1025 14:29:06.790530    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
E1025 14:29:34.498871    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
E1025 14:29:52.699059    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory

                                                
                                    
x
+
TestCertExpiration (195.24s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-410000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-410000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.805227125s)

                                                
                                                
-- stdout --
	* [cert-expiration-410000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-410000 in cluster cert-expiration-410000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-410000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-410000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-410000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-410000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-410000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.256944292s)

                                                
                                                
-- stdout --
	* [cert-expiration-410000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-410000 in cluster cert-expiration-410000
	* Restarting existing qemu2 VM for "cert-expiration-410000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-410000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-410000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-410000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-410000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-410000 in cluster cert-expiration-410000
	* Restarting existing qemu2 VM for "cert-expiration-410000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-410000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-410000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-10-25 14:31:40.676232 -0700 PDT m=+1291.627192959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-410000 -n cert-expiration-410000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-410000 -n cert-expiration-410000: exit status 7 (70.839333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-410000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-410000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-410000
--- FAIL: TestCertExpiration (195.24s)

                                                
                                    
x
+
TestDockerFlags (10.15s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-830000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-830000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.883602916s)

                                                
                                                
-- stdout --
	* [docker-flags-830000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-830000 in cluster docker-flags-830000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-830000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:28:20.597858    3820 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:28:20.597985    3820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:28:20.597987    3820 out.go:309] Setting ErrFile to fd 2...
	I1025 14:28:20.597990    3820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:28:20.598135    3820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:28:20.599175    3820 out.go:303] Setting JSON to false
	I1025 14:28:20.615258    3820 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1674,"bootTime":1698267626,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:28:20.615339    3820 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:28:20.621788    3820 out.go:177] * [docker-flags-830000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:28:20.629749    3820 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:28:20.633765    3820 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:28:20.629779    3820 notify.go:220] Checking for updates...
	I1025 14:28:20.636736    3820 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:28:20.639747    3820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:28:20.642787    3820 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:28:20.644180    3820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:28:20.648054    3820 config.go:182] Loaded profile config "force-systemd-flag-683000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:28:20.648117    3820 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:28:20.648168    3820 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:28:20.652777    3820 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:28:20.658697    3820 start.go:298] selected driver: qemu2
	I1025 14:28:20.658704    3820 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:28:20.658709    3820 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:28:20.661123    3820 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:28:20.663704    3820 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:28:20.667657    3820 start_flags.go:921] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1025 14:28:20.667689    3820 cni.go:84] Creating CNI manager for ""
	I1025 14:28:20.667704    3820 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:28:20.667709    3820 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 14:28:20.667714    3820 start_flags.go:323] config:
	{Name:docker-flags-830000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:docker-flags-830000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/ru
n/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:28:20.672325    3820 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:28:20.679751    3820 out.go:177] * Starting control plane node docker-flags-830000 in cluster docker-flags-830000
	I1025 14:28:20.683719    3820 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:28:20.683735    3820 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:28:20.683747    3820 cache.go:56] Caching tarball of preloaded images
	I1025 14:28:20.683805    3820 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:28:20.683810    3820 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:28:20.683875    3820 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/docker-flags-830000/config.json ...
	I1025 14:28:20.683887    3820 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/docker-flags-830000/config.json: {Name:mk0736428ae0bcbe145d7e7e4d3fef9ab1435073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:28:20.684102    3820 start.go:365] acquiring machines lock for docker-flags-830000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:28:20.684136    3820 start.go:369] acquired machines lock for "docker-flags-830000" in 25.25µs
	I1025 14:28:20.684148    3820 start.go:93] Provisioning new machine with config: &{Name:docker-flags-830000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:docker-flags-830000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:28:20.684186    3820 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:28:20.688793    3820 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 14:28:20.706088    3820 start.go:159] libmachine.API.Create for "docker-flags-830000" (driver="qemu2")
	I1025 14:28:20.706121    3820 client.go:168] LocalClient.Create starting
	I1025 14:28:20.706173    3820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:28:20.706200    3820 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:20.706209    3820 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:20.706246    3820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:28:20.706265    3820 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:20.706272    3820 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:20.706597    3820 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:28:20.831214    3820 main.go:141] libmachine: Creating SSH key...
	I1025 14:28:20.929401    3820 main.go:141] libmachine: Creating Disk image...
	I1025 14:28:20.929408    3820 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:28:20.929588    3820 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/disk.qcow2
	I1025 14:28:20.941829    3820 main.go:141] libmachine: STDOUT: 
	I1025 14:28:20.941845    3820 main.go:141] libmachine: STDERR: 
	I1025 14:28:20.941899    3820 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/disk.qcow2 +20000M
	I1025 14:28:20.952395    3820 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:28:20.952408    3820 main.go:141] libmachine: STDERR: 
	I1025 14:28:20.952426    3820 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/disk.qcow2
	I1025 14:28:20.952434    3820 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:28:20.952467    3820 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:c2:16:49:14:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/disk.qcow2
	I1025 14:28:20.954153    3820 main.go:141] libmachine: STDOUT: 
	I1025 14:28:20.954167    3820 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:28:20.954185    3820 client.go:171] LocalClient.Create took 248.058209ms
	I1025 14:28:22.956364    3820 start.go:128] duration metric: createHost completed in 2.272152875s
	I1025 14:28:22.956433    3820 start.go:83] releasing machines lock for "docker-flags-830000", held for 2.272287792s
	W1025 14:28:22.956521    3820 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:28:22.976385    3820 out.go:177] * Deleting "docker-flags-830000" in qemu2 ...
	W1025 14:28:22.994537    3820 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:28:22.994557    3820 start.go:706] Will try again in 5 seconds ...
	I1025 14:28:27.996826    3820 start.go:365] acquiring machines lock for docker-flags-830000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:28:28.095185    3820 start.go:369] acquired machines lock for "docker-flags-830000" in 98.216625ms
	I1025 14:28:28.095310    3820 start.go:93] Provisioning new machine with config: &{Name:docker-flags-830000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:docker-flags-830000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:28:28.095553    3820 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:28:28.107164    3820 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 14:28:28.154355    3820 start.go:159] libmachine.API.Create for "docker-flags-830000" (driver="qemu2")
	I1025 14:28:28.154401    3820 client.go:168] LocalClient.Create starting
	I1025 14:28:28.154556    3820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:28:28.154614    3820 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:28.154633    3820 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:28.154696    3820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:28:28.154737    3820 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:28.154752    3820 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:28.155289    3820 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:28:28.299452    3820 main.go:141] libmachine: Creating SSH key...
	I1025 14:28:28.383991    3820 main.go:141] libmachine: Creating Disk image...
	I1025 14:28:28.384002    3820 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:28:28.384184    3820 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/disk.qcow2
	I1025 14:28:28.396363    3820 main.go:141] libmachine: STDOUT: 
	I1025 14:28:28.396382    3820 main.go:141] libmachine: STDERR: 
	I1025 14:28:28.396452    3820 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/disk.qcow2 +20000M
	I1025 14:28:28.407008    3820 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:28:28.407023    3820 main.go:141] libmachine: STDERR: 
	I1025 14:28:28.407036    3820 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/disk.qcow2
	I1025 14:28:28.407046    3820 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:28:28.407099    3820 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:4e:34:65:ec:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/docker-flags-830000/disk.qcow2
	I1025 14:28:28.408807    3820 main.go:141] libmachine: STDOUT: 
	I1025 14:28:28.408822    3820 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:28:28.408836    3820 client.go:171] LocalClient.Create took 254.429334ms
	I1025 14:28:30.411054    3820 start.go:128] duration metric: createHost completed in 2.315459375s
	I1025 14:28:30.411133    3820 start.go:83] releasing machines lock for "docker-flags-830000", held for 2.315905458s
	W1025 14:28:30.411723    3820 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-830000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-830000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:28:30.421366    3820 out.go:177] 
	W1025 14:28:30.425466    3820 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:28:30.425496    3820 out.go:239] * 
	* 
	W1025 14:28:30.428031    3820 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:28:30.436359    3820 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-830000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-830000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-830000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (80.772584ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-830000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-830000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-830000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-830000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-830000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-830000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (45.629291ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-830000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-830000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-830000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-830000\"\n"
panic.go:523: *** TestDockerFlags FAILED at 2023-10-25 14:28:30.582563 -0700 PDT m=+1101.449470168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-830000 -n docker-flags-830000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-830000 -n docker-flags-830000: exit status 7 (31.72425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-830000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-830000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-830000
--- FAIL: TestDockerFlags (10.15s)

                                                
                                    
x
+
TestForceSystemdFlag (11.83s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-683000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-683000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.613626208s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-683000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-683000 in cluster force-systemd-flag-683000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-683000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:28:13.894675    3796 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:28:13.894820    3796 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:28:13.894823    3796 out.go:309] Setting ErrFile to fd 2...
	I1025 14:28:13.894826    3796 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:28:13.894948    3796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:28:13.895949    3796 out.go:303] Setting JSON to false
	I1025 14:28:13.911768    3796 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1667,"bootTime":1698267626,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:28:13.911840    3796 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:28:13.917955    3796 out.go:177] * [force-systemd-flag-683000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:28:13.924916    3796 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:28:13.924978    3796 notify.go:220] Checking for updates...
	I1025 14:28:13.930902    3796 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:28:13.933882    3796 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:28:13.936829    3796 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:28:13.939901    3796 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:28:13.942915    3796 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:28:13.944600    3796 config.go:182] Loaded profile config "force-systemd-env-302000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:28:13.944667    3796 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:28:13.944709    3796 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:28:13.948860    3796 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:28:13.955717    3796 start.go:298] selected driver: qemu2
	I1025 14:28:13.955724    3796 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:28:13.955729    3796 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:28:13.958039    3796 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:28:13.960874    3796 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:28:13.963967    3796 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 14:28:13.963999    3796 cni.go:84] Creating CNI manager for ""
	I1025 14:28:13.964006    3796 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:28:13.964012    3796 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 14:28:13.964016    3796 start_flags.go:323] config:
	{Name:force-systemd-flag-683000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:force-systemd-flag-683000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:28:13.968555    3796 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:28:13.975889    3796 out.go:177] * Starting control plane node force-systemd-flag-683000 in cluster force-systemd-flag-683000
	I1025 14:28:13.979916    3796 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:28:13.979931    3796 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:28:13.979948    3796 cache.go:56] Caching tarball of preloaded images
	I1025 14:28:13.980011    3796 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:28:13.980017    3796 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:28:13.980071    3796 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/force-systemd-flag-683000/config.json ...
	I1025 14:28:13.980082    3796 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/force-systemd-flag-683000/config.json: {Name:mke745ed132f8be05c98457c328754ec7e439d38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:28:13.980277    3796 start.go:365] acquiring machines lock for force-systemd-flag-683000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:28:13.980315    3796 start.go:369] acquired machines lock for "force-systemd-flag-683000" in 25.875µs
	I1025 14:28:13.980326    3796 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-683000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:force-systemd-flag-683000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:28:13.980362    3796 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:28:13.983850    3796 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 14:28:14.000897    3796 start.go:159] libmachine.API.Create for "force-systemd-flag-683000" (driver="qemu2")
	I1025 14:28:14.000931    3796 client.go:168] LocalClient.Create starting
	I1025 14:28:14.000997    3796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:28:14.001023    3796 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:14.001036    3796 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:14.001072    3796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:28:14.001091    3796 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:14.001098    3796 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:14.001418    3796 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:28:14.124686    3796 main.go:141] libmachine: Creating SSH key...
	I1025 14:28:14.224935    3796 main.go:141] libmachine: Creating Disk image...
	I1025 14:28:14.224940    3796 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:28:14.225081    3796 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/disk.qcow2
	I1025 14:28:14.237312    3796 main.go:141] libmachine: STDOUT: 
	I1025 14:28:14.237330    3796 main.go:141] libmachine: STDERR: 
	I1025 14:28:14.237384    3796 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/disk.qcow2 +20000M
	I1025 14:28:14.247742    3796 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:28:14.247755    3796 main.go:141] libmachine: STDERR: 
	I1025 14:28:14.247780    3796 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/disk.qcow2
	I1025 14:28:14.247791    3796 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:28:14.247819    3796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:ca:c4:ec:14:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/disk.qcow2
	I1025 14:28:14.249474    3796 main.go:141] libmachine: STDOUT: 
	I1025 14:28:14.249486    3796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:28:14.249503    3796 client.go:171] LocalClient.Create took 248.567833ms
	I1025 14:28:16.251844    3796 start.go:128] duration metric: createHost completed in 2.271404959s
	I1025 14:28:16.251971    3796 start.go:83] releasing machines lock for "force-systemd-flag-683000", held for 2.27164475s
	W1025 14:28:16.252023    3796 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:28:16.259290    3796 out.go:177] * Deleting "force-systemd-flag-683000" in qemu2 ...
	W1025 14:28:16.281044    3796 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:28:16.281068    3796 start.go:706] Will try again in 5 seconds ...
	I1025 14:28:21.283291    3796 start.go:365] acquiring machines lock for force-systemd-flag-683000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:28:22.956591    3796 start.go:369] acquired machines lock for "force-systemd-flag-683000" in 1.673174292s
	I1025 14:28:22.956710    3796 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-683000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:force-systemd-flag-683000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:28:22.956926    3796 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:28:22.969455    3796 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 14:28:23.017263    3796 start.go:159] libmachine.API.Create for "force-systemd-flag-683000" (driver="qemu2")
	I1025 14:28:23.017306    3796 client.go:168] LocalClient.Create starting
	I1025 14:28:23.017447    3796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:28:23.017507    3796 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:23.017528    3796 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:23.017586    3796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:28:23.017621    3796 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:23.017640    3796 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:23.018130    3796 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:28:23.162303    3796 main.go:141] libmachine: Creating SSH key...
	I1025 14:28:23.404829    3796 main.go:141] libmachine: Creating Disk image...
	I1025 14:28:23.404843    3796 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:28:23.405050    3796 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/disk.qcow2
	I1025 14:28:23.417870    3796 main.go:141] libmachine: STDOUT: 
	I1025 14:28:23.417891    3796 main.go:141] libmachine: STDERR: 
	I1025 14:28:23.417971    3796 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/disk.qcow2 +20000M
	I1025 14:28:23.428539    3796 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:28:23.428560    3796 main.go:141] libmachine: STDERR: 
	I1025 14:28:23.428573    3796 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/disk.qcow2
	I1025 14:28:23.428579    3796 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:28:23.428616    3796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:a2:da:02:d7:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-flag-683000/disk.qcow2
	I1025 14:28:23.430291    3796 main.go:141] libmachine: STDOUT: 
	I1025 14:28:23.430304    3796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:28:23.430316    3796 client.go:171] LocalClient.Create took 413.002208ms
	I1025 14:28:25.432525    3796 start.go:128] duration metric: createHost completed in 2.475563459s
	I1025 14:28:25.432612    3796 start.go:83] releasing machines lock for "force-systemd-flag-683000", held for 2.475977167s
	W1025 14:28:25.433007    3796 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-683000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-683000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:28:25.446830    3796 out.go:177] 
	W1025 14:28:25.450900    3796 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:28:25.450929    3796 out.go:239] * 
	* 
	W1025 14:28:25.453662    3796 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:28:25.463752    3796 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-683000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-683000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-683000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (77.092917ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-683000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-683000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-10-25 14:28:25.559141 -0700 PDT m=+1096.426049168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-683000 -n force-systemd-flag-683000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-683000 -n force-systemd-flag-683000: exit status 7 (34.880291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-683000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-683000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-683000
--- FAIL: TestForceSystemdFlag (11.83s)

                                                
                                    
x
+
TestForceSystemdEnv (10.12s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-302000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-302000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.898360083s)

                                                
                                                
-- stdout --
	* [force-systemd-env-302000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-302000 in cluster force-systemd-env-302000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-302000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:28:10.483200    3776 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:28:10.483364    3776 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:28:10.483368    3776 out.go:309] Setting ErrFile to fd 2...
	I1025 14:28:10.483370    3776 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:28:10.483494    3776 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:28:10.484542    3776 out.go:303] Setting JSON to false
	I1025 14:28:10.501005    3776 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1664,"bootTime":1698267626,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:28:10.501109    3776 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:28:10.506337    3776 out.go:177] * [force-systemd-env-302000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:28:10.513274    3776 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:28:10.517341    3776 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:28:10.513367    3776 notify.go:220] Checking for updates...
	I1025 14:28:10.523284    3776 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:28:10.526326    3776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:28:10.529228    3776 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:28:10.532267    3776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1025 14:28:10.535599    3776 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:28:10.535647    3776 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:28:10.539193    3776 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:28:10.546262    3776 start.go:298] selected driver: qemu2
	I1025 14:28:10.546269    3776 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:28:10.546274    3776 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:28:10.548436    3776 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:28:10.549713    3776 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:28:10.552363    3776 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 14:28:10.552389    3776 cni.go:84] Creating CNI manager for ""
	I1025 14:28:10.552395    3776 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:28:10.552399    3776 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 14:28:10.552403    3776 start_flags.go:323] config:
	{Name:force-systemd-env-302000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:force-systemd-env-302000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:28:10.556628    3776 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:28:10.564218    3776 out.go:177] * Starting control plane node force-systemd-env-302000 in cluster force-systemd-env-302000
	I1025 14:28:10.568273    3776 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:28:10.568287    3776 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:28:10.568293    3776 cache.go:56] Caching tarball of preloaded images
	I1025 14:28:10.568343    3776 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:28:10.568348    3776 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:28:10.568389    3776 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/force-systemd-env-302000/config.json ...
	I1025 14:28:10.568398    3776 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/force-systemd-env-302000/config.json: {Name:mkc9682b192be118e37eebd19e6ed3d78b9d1c07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:28:10.568585    3776 start.go:365] acquiring machines lock for force-systemd-env-302000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:28:10.568613    3776 start.go:369] acquired machines lock for "force-systemd-env-302000" in 20.875µs
	I1025 14:28:10.568623    3776 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-302000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.3 ClusterName:force-systemd-env-302000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:28:10.568654    3776 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:28:10.577259    3776 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 14:28:10.592150    3776 start.go:159] libmachine.API.Create for "force-systemd-env-302000" (driver="qemu2")
	I1025 14:28:10.592175    3776 client.go:168] LocalClient.Create starting
	I1025 14:28:10.592234    3776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:28:10.592261    3776 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:10.592273    3776 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:10.592308    3776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:28:10.592331    3776 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:10.592341    3776 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:10.592695    3776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:28:10.713723    3776 main.go:141] libmachine: Creating SSH key...
	I1025 14:28:10.781454    3776 main.go:141] libmachine: Creating Disk image...
	I1025 14:28:10.781466    3776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:28:10.781664    3776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/disk.qcow2
	I1025 14:28:10.794529    3776 main.go:141] libmachine: STDOUT: 
	I1025 14:28:10.794557    3776 main.go:141] libmachine: STDERR: 
	I1025 14:28:10.794636    3776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/disk.qcow2 +20000M
	I1025 14:28:10.806518    3776 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:28:10.806548    3776 main.go:141] libmachine: STDERR: 
	I1025 14:28:10.806572    3776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/disk.qcow2
	I1025 14:28:10.806577    3776 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:28:10.806612    3776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:8c:29:56:21:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/disk.qcow2
	I1025 14:28:10.808740    3776 main.go:141] libmachine: STDOUT: 
	I1025 14:28:10.808761    3776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:28:10.808780    3776 client.go:171] LocalClient.Create took 216.600833ms
	I1025 14:28:12.810998    3776 start.go:128] duration metric: createHost completed in 2.242310917s
	I1025 14:28:12.811105    3776 start.go:83] releasing machines lock for "force-systemd-env-302000", held for 2.242475166s
	W1025 14:28:12.811308    3776 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:28:12.823340    3776 out.go:177] * Deleting "force-systemd-env-302000" in qemu2 ...
	W1025 14:28:12.845840    3776 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:28:12.845871    3776 start.go:706] Will try again in 5 seconds ...
	I1025 14:28:17.848105    3776 start.go:365] acquiring machines lock for force-systemd-env-302000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:28:17.848490    3776 start.go:369] acquired machines lock for "force-systemd-env-302000" in 308.333µs
	I1025 14:28:17.848629    3776 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-302000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.3 ClusterName:force-systemd-env-302000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:28:17.848945    3776 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:28:17.857550    3776 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 14:28:17.903996    3776 start.go:159] libmachine.API.Create for "force-systemd-env-302000" (driver="qemu2")
	I1025 14:28:17.904053    3776 client.go:168] LocalClient.Create starting
	I1025 14:28:17.904142    3776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:28:17.904197    3776 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:17.904215    3776 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:17.904276    3776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:28:17.904310    3776 main.go:141] libmachine: Decoding PEM data...
	I1025 14:28:17.904326    3776 main.go:141] libmachine: Parsing certificate...
	I1025 14:28:17.904796    3776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:28:18.038923    3776 main.go:141] libmachine: Creating SSH key...
	I1025 14:28:18.281670    3776 main.go:141] libmachine: Creating Disk image...
	I1025 14:28:18.281682    3776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:28:18.281863    3776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/disk.qcow2
	I1025 14:28:18.294589    3776 main.go:141] libmachine: STDOUT: 
	I1025 14:28:18.294610    3776 main.go:141] libmachine: STDERR: 
	I1025 14:28:18.294701    3776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/disk.qcow2 +20000M
	I1025 14:28:18.305262    3776 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:28:18.305277    3776 main.go:141] libmachine: STDERR: 
	I1025 14:28:18.305296    3776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/disk.qcow2
	I1025 14:28:18.305311    3776 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:28:18.305355    3776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:b8:34:67:f0:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/force-systemd-env-302000/disk.qcow2
	I1025 14:28:18.307073    3776 main.go:141] libmachine: STDOUT: 
	I1025 14:28:18.307087    3776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:28:18.307100    3776 client.go:171] LocalClient.Create took 403.039917ms
	I1025 14:28:20.309323    3776 start.go:128] duration metric: createHost completed in 2.460331084s
	I1025 14:28:20.309411    3776 start.go:83] releasing machines lock for "force-systemd-env-302000", held for 2.460897166s
	W1025 14:28:20.309869    3776 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-302000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-302000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:28:20.318499    3776 out.go:177] 
	W1025 14:28:20.322613    3776 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:28:20.322642    3776 out.go:239] * 
	* 
	W1025 14:28:20.325288    3776 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:28:20.335576    3776 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-302000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-302000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-302000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (78.139792ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-302000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-302000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-10-25 14:28:20.430138 -0700 PDT m=+1091.297047543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-302000 -n force-systemd-env-302000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-302000 -n force-systemd-env-302000: exit status 7 (34.222791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-302000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-302000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-302000
--- FAIL: TestForceSystemdEnv (10.12s)

                                                
                                    
x
+
TestErrorSpam/setup (18.65s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-607000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-607000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 --driver=qemu2 : exit status 90 (18.649224042s)

                                                
                                                
-- stdout --
	* [nospam-607000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node nospam-607000 in cluster nospam-607000
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-607000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 --driver=qemu2 " failed: exit status 90
error_spam_test.go:96: unexpected stderr: "X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Job failed. See \"journalctl -xe\" for details."
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-607000] minikube v1.31.2 on Darwin 14.0 (arm64)
- MINIKUBE_LOCATION=17488
- KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting control plane node nospam-607000 in cluster nospam-607000
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...

                                                
                                                

                                                
                                                
error_spam_test.go:111: minikube stderr:
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:

                                                
                                                
stderr:
Job failed. See "journalctl -xe" for details.

                                                
                                                
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (18.65s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (30.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-260000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-260000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-qqzcm" [da9084e5-8519-43f0-a897-46be5d030584] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-qqzcm" [da9084e5-8519-43f0-a897-46be5d030584] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.006558667s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:31897
functional_test.go:1660: error fetching http://192.168.105.4:31897: Get "http://192.168.105.4:31897": dial tcp 192.168.105.4:31897: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31897: Get "http://192.168.105.4:31897": dial tcp 192.168.105.4:31897: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31897: Get "http://192.168.105.4:31897": dial tcp 192.168.105.4:31897: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31897: Get "http://192.168.105.4:31897": dial tcp 192.168.105.4:31897: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31897: Get "http://192.168.105.4:31897": dial tcp 192.168.105.4:31897: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31897: Get "http://192.168.105.4:31897": dial tcp 192.168.105.4:31897: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31897: Get "http://192.168.105.4:31897": dial tcp 192.168.105.4:31897: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:31897: Get "http://192.168.105.4:31897": dial tcp 192.168.105.4:31897: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-260000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-qqzcm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-260000/192.168.105.4
Start Time:       Wed, 25 Oct 2023 14:19:28 -0700
Labels:           app=hello-node-connect
pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
echoserver-arm:
Container ID:   docker://846459fcedb64a5b3e2e8f7f800edd9e697768fccec21d0f8fdacd3ac06b8b47
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Terminated
Reason:       Error
Exit Code:    1
Started:      Wed, 25 Oct 2023 14:19:45 -0700
Finished:     Wed, 25 Oct 2023 14:19:45 -0700
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Wed, 25 Oct 2023 14:19:29 -0700
Finished:     Wed, 25 Oct 2023 14:19:29 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qnfxv (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-qnfxv:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  30s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-qqzcm to functional-260000
Normal   Pulled     13s (x3 over 29s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    13s (x3 over 29s)  kubelet            Created container echoserver-arm
Normal   Started    13s (x3 over 29s)  kubelet            Started container echoserver-arm
Warning  BackOff    13s (x2 over 28s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-qqzcm_default(da9084e5-8519-43f0-a897-46be5d030584)

                                                
                                                
functional_test.go:1607: (dbg) Run:  kubectl --context functional-260000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1613: (dbg) Run:  kubectl --context functional-260000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.182.208
IPs:                      10.97.182.208
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31897/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-260000 -n functional-260000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-260000 ssh -- ls                                                                                         | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh cat                                                                                           | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|           | /mount-9p/test-1698268784394534000                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh stat                                                                                          | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|           | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh stat                                                                                          | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|           | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh sudo                                                                                          | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh findmnt                                                                                       | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-260000                                                                                                | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port543950386/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh findmnt                                                                                       | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh -- ls                                                                                         | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh sudo                                                                                          | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-260000                                                                                                | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup561143949/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-260000                                                                                                | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup561143949/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-260000                                                                                                | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup561143949/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh findmnt                                                                                       | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh findmnt                                                                                       | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh findmnt                                                                                       | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh findmnt                                                                                       | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|           | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh findmnt                                                                                       | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh findmnt                                                                                       | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-260000 ssh findmnt                                                                                       | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|           | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-260000                                                                                                | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|           | --kill=true                                                                                                         |                   |         |         |                     |                     |
	| start     | -p functional-260000                                                                                                | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-260000 --dry-run                                                                                      | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-260000                                                                                                | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                  | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|           | -p functional-260000                                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 14:19:52
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 14:19:52.719543    2705 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:19:52.719688    2705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:19:52.719691    2705 out.go:309] Setting ErrFile to fd 2...
	I1025 14:19:52.719694    2705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:19:52.719820    2705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:19:52.721213    2705 out.go:303] Setting JSON to false
	I1025 14:19:52.738326    2705 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1166,"bootTime":1698267626,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:19:52.738422    2705 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:19:52.742570    2705 out.go:177] * [functional-260000] minikube v1.31.2 sur Darwin 14.0 (arm64)
	I1025 14:19:52.749530    2705 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:19:52.749589    2705 notify.go:220] Checking for updates...
	I1025 14:19:52.756560    2705 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:19:52.759570    2705 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:19:52.762526    2705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:19:52.765552    2705 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:19:52.768510    2705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:19:52.771773    2705 config.go:182] Loaded profile config "functional-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:19:52.772004    2705 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:19:52.776490    2705 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1025 14:19:52.783537    2705 start.go:298] selected driver: qemu2
	I1025 14:19:52.783545    2705 start.go:902] validating driver "qemu2" against &{Name:functional-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.3 ClusterName:functional-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:19:52.783605    2705 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:19:52.788517    2705 out.go:177] 
	W1025 14:19:52.792553    2705 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 14:19:52.795572    2705 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-10-25 21:17:06 UTC, ends at Wed 2023-10-25 21:19:58 UTC. --
	Oct 25 21:19:53 functional-260000 dockerd[6746]: time="2023-10-25T21:19:53.721044881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:19:53 functional-260000 dockerd[6746]: time="2023-10-25T21:19:53.721072505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 25 21:19:53 functional-260000 dockerd[6746]: time="2023-10-25T21:19:53.721094339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:19:53 functional-260000 dockerd[6746]: time="2023-10-25T21:19:53.723463327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 25 21:19:53 functional-260000 dockerd[6746]: time="2023-10-25T21:19:53.723494368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:19:53 functional-260000 dockerd[6746]: time="2023-10-25T21:19:53.723563743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 25 21:19:53 functional-260000 dockerd[6746]: time="2023-10-25T21:19:53.723576034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:19:53 functional-260000 cri-dockerd[7001]: time="2023-10-25T21:19:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea0ed63aea1aeef97a466dbce5db7011fb7a4e5faf63c3047ddec15251772aaf/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 25 21:19:53 functional-260000 cri-dockerd[7001]: time="2023-10-25T21:19:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c1ce64c625946e38faa48695bd79acb0343b6fbfc5f5627eccb5782559ec494f/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 25 21:19:54 functional-260000 dockerd[6740]: time="2023-10-25T21:19:54.098336792Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 21:19:55 functional-260000 dockerd[6746]: time="2023-10-25T21:19:55.068371641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 25 21:19:55 functional-260000 dockerd[6746]: time="2023-10-25T21:19:55.068621848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:19:55 functional-260000 dockerd[6746]: time="2023-10-25T21:19:55.068923221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 25 21:19:55 functional-260000 dockerd[6746]: time="2023-10-25T21:19:55.068932471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:19:55 functional-260000 dockerd[6746]: time="2023-10-25T21:19:55.105933234Z" level=info msg="shim disconnected" id=a2cb9703f3db2db85c6457fec00043a2aad96793a738ddf9dfe1d734a54736a9 namespace=moby
	Oct 25 21:19:55 functional-260000 dockerd[6746]: time="2023-10-25T21:19:55.105963776Z" level=warning msg="cleaning up after shim disconnected" id=a2cb9703f3db2db85c6457fec00043a2aad96793a738ddf9dfe1d734a54736a9 namespace=moby
	Oct 25 21:19:55 functional-260000 dockerd[6746]: time="2023-10-25T21:19:55.105968109Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 25 21:19:55 functional-260000 dockerd[6740]: time="2023-10-25T21:19:55.106089775Z" level=info msg="ignoring event" container=a2cb9703f3db2db85c6457fec00043a2aad96793a738ddf9dfe1d734a54736a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 21:19:55 functional-260000 dockerd[6746]: time="2023-10-25T21:19:55.116050014Z" level=warning msg="cleanup warnings time=\"2023-10-25T21:19:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Oct 25 21:19:55 functional-260000 cri-dockerd[7001]: time="2023-10-25T21:19:55Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 21:19:55 functional-260000 dockerd[6746]: time="2023-10-25T21:19:55.744515948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 25 21:19:55 functional-260000 dockerd[6746]: time="2023-10-25T21:19:55.744653988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:19:55 functional-260000 dockerd[6746]: time="2023-10-25T21:19:55.744751571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 25 21:19:55 functional-260000 dockerd[6746]: time="2023-10-25T21:19:55.744779654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:19:55 functional-260000 dockerd[6740]: time="2023-10-25T21:19:55.940424455Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	2b8f134ca45b6       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   3 seconds ago        Running             dashboard-metrics-scraper   0                   ea0ed63aea1ae       dashboard-metrics-scraper-7fd5cb4ddc-66mbj
	a2cb9703f3db2       72565bf5bbedf                                                                                          3 seconds ago        Exited              echoserver-arm              3                   9bed8e74ce72b       hello-node-759d89bdcc-mxg96
	8a8816bcfee80       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    12 seconds ago       Exited              mount-munger                0                   4a3e5769037c6       busybox-mount
	846459fcedb64       72565bf5bbedf                                                                                          13 seconds ago       Exited              echoserver-arm              2                   47687d7fa4a99       hello-node-connect-7799dfb7c6-qqzcm
	2a733b535ff39       nginx@sha256:add4792d930c25dd2abf2ef9ea79de578097a1c175a16ab25814332fe33622de                          22 seconds ago       Running             myfrontend                  0                   af0d081876e04       sp-pod
	4cbca6586ca99       nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77                          38 seconds ago       Running             nginx                       0                   9c21850c362a7       nginx-svc
	738d91db33c2a       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         3                   c0f7920fc6f6d       storage-provisioner
	72bc0a869c357       97e04611ad434                                                                                          About a minute ago   Running             coredns                     2                   93cf2327ddf17       coredns-5dd5756b68-k84pz
	089678878f527       ba04bb24b9575                                                                                          About a minute ago   Exited              storage-provisioner         2                   c0f7920fc6f6d       storage-provisioner
	6c1a433b8dbe3       a5dd5cdd6d3ef                                                                                          About a minute ago   Running             kube-proxy                  2                   a20717014e6eb       kube-proxy-znc5c
	c37637190c6f7       42a4e73724daa                                                                                          About a minute ago   Running             kube-scheduler              2                   cfdd3c28a9b8e       kube-scheduler-functional-260000
	b4c65fe2189f3       9cdd6470f48c8                                                                                          About a minute ago   Running             etcd                        2                   3c9cb2347218a       etcd-functional-260000
	91a8aa9b4f493       537e9a59ee2fd                                                                                          About a minute ago   Running             kube-apiserver              0                   34105513e4b9c       kube-apiserver-functional-260000
	c4ce90c92a307       8276439b4f237                                                                                          About a minute ago   Running             kube-controller-manager     2                   048b5d2a4bca6       kube-controller-manager-functional-260000
	5ddb891b42af7       9cdd6470f48c8                                                                                          2 minutes ago        Exited              etcd                        1                   27de23521292f       etcd-functional-260000
	18b46705a5935       42a4e73724daa                                                                                          2 minutes ago        Exited              kube-scheduler              1                   c5216accdf5c3       kube-scheduler-functional-260000
	d23077c40c42c       a5dd5cdd6d3ef                                                                                          2 minutes ago        Exited              kube-proxy                  1                   593af1b5f63b5       kube-proxy-znc5c
	a367728399265       97e04611ad434                                                                                          2 minutes ago        Exited              coredns                     1                   c848e050ddfef       coredns-5dd5756b68-k84pz
	72fb5640000a7       8276439b4f237                                                                                          2 minutes ago        Exited              kube-controller-manager     1                   c3b65e4fb207e       kube-controller-manager-functional-260000
	
	* 
	* ==> coredns [72bc0a869c35] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52212 - 7019 "HINFO IN 598052636073933014.7897127380945901574. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.011976169s
	[INFO] 10.244.0.1:29265 - 58332 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.00010875s
	[INFO] 10.244.0.1:9581 - 57710 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000087833s
	[INFO] 10.244.0.1:55472 - 61396 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.00003025s
	[INFO] 10.244.0.1:56225 - 12603 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001201249s
	[INFO] 10.244.0.1:29638 - 3161 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.00010125s
	[INFO] 10.244.0.1:20552 - 30523 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000129875s
	
	* 
	* ==> coredns [a36772839926] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48228 - 36601 "HINFO IN 3957733486594829460.8521077467237888223. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004613008s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-260000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-260000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=functional-260000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T14_17_23_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Oct 2023 21:17:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-260000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Oct 2023 21:19:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Oct 2023 21:19:42 +0000   Wed, 25 Oct 2023 21:17:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Oct 2023 21:19:42 +0000   Wed, 25 Oct 2023 21:17:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Oct 2023 21:19:42 +0000   Wed, 25 Oct 2023 21:17:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Oct 2023 21:19:42 +0000   Wed, 25 Oct 2023 21:17:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-260000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905016Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905016Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf7fdbec43d141728cedff527f298d05
	  System UUID:                cf7fdbec43d141728cedff527f298d05
	  Boot ID:                    759230c4-288f-4e8f-8f5b-f2a14d9b2801
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-mxg96                   0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         52s
	  default                     hello-node-connect-7799dfb7c6-qqzcm           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         30s
	  default                     nginx-svc                                     0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         41s
	  default                     sp-pod                                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         23s
	  kube-system                 coredns-5dd5756b68-k84pz                      100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (1%!)(MISSING)        170Mi (4%!)(MISSING)     2m22s
	  kube-system                 etcd-functional-260000                        100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (2%!)(MISSING)       0 (0%!)(MISSING)         2m35s
	  kube-system                 kube-apiserver-functional-260000              250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         77s
	  kube-system                 kube-controller-manager-functional-260000     200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m35s
	  kube-system                 kube-proxy-znc5c                              0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m22s
	  kube-system                 kube-scheduler-functional-260000              100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m35s
	  kube-system                 storage-provisioner                           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-66mbj    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-fg5k5         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%!)(MISSING)  0 (0%!)(MISSING)
	  memory             170Mi (4%!)(MISSING)  170Mi (4%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-32Mi     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-64Ki     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m21s                  kube-proxy       
	  Normal  Starting                 77s                    kube-proxy       
	  Normal  Starting                 2m                     kube-proxy       
	  Normal  Starting                 2m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node functional-260000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node functional-260000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m39s (x7 over 2m39s)  kubelet          Node functional-260000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m35s                  kubelet          Node functional-260000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m35s                  kubelet          Node functional-260000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s                  kubelet          Node functional-260000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m31s                  kubelet          Node functional-260000 status is now: NodeReady
	  Normal  RegisteredNode           2m23s                  node-controller  Node functional-260000 event: Registered Node functional-260000 in Controller
	  Normal  NodeNotReady             2m15s                  kubelet          Node functional-260000 status is now: NodeNotReady
	  Normal  RegisteredNode           107s                   node-controller  Node functional-260000 event: Registered Node functional-260000 in Controller
	  Normal  Starting                 81s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  81s (x8 over 81s)      kubelet          Node functional-260000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s (x8 over 81s)      kubelet          Node functional-260000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s (x7 over 81s)      kubelet          Node functional-260000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  81s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           65s                    node-controller  Node functional-260000 event: Registered Node functional-260000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.133593] systemd-fstab-generator[3852]: Ignoring "noauto" for root device
	[  +0.080189] systemd-fstab-generator[3863]: Ignoring "noauto" for root device
	[  +0.094816] systemd-fstab-generator[3876]: Ignoring "noauto" for root device
	[  +5.151329] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.298984] systemd-fstab-generator[4439]: Ignoring "noauto" for root device
	[  +0.072504] systemd-fstab-generator[4450]: Ignoring "noauto" for root device
	[  +0.062574] systemd-fstab-generator[4461]: Ignoring "noauto" for root device
	[  +0.073742] systemd-fstab-generator[4472]: Ignoring "noauto" for root device
	[  +0.087579] systemd-fstab-generator[4543]: Ignoring "noauto" for root device
	[  +5.531951] kauditd_printk_skb: 34 callbacks suppressed
	[Oct25 21:18] systemd-fstab-generator[6281]: Ignoring "noauto" for root device
	[  +0.135837] systemd-fstab-generator[6315]: Ignoring "noauto" for root device
	[  +0.085338] systemd-fstab-generator[6326]: Ignoring "noauto" for root device
	[  +0.090627] systemd-fstab-generator[6339]: Ignoring "noauto" for root device
	[ +11.384531] systemd-fstab-generator[6892]: Ignoring "noauto" for root device
	[  +0.065348] systemd-fstab-generator[6903]: Ignoring "noauto" for root device
	[  +0.069187] systemd-fstab-generator[6914]: Ignoring "noauto" for root device
	[  +0.067148] systemd-fstab-generator[6925]: Ignoring "noauto" for root device
	[  +0.087466] systemd-fstab-generator[6994]: Ignoring "noauto" for root device
	[  +0.856177] systemd-fstab-generator[7249]: Ignoring "noauto" for root device
	[  +4.628915] kauditd_printk_skb: 29 callbacks suppressed
	[Oct25 21:19] kauditd_printk_skb: 9 callbacks suppressed
	[  +0.927025] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +13.620455] kauditd_printk_skb: 1 callbacks suppressed
	[ +13.407800] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [5ddb891b42af] <==
	* {"level":"info","ts":"2023-10-25T21:17:56.580532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:17:58.176325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-25T21:17:58.176479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-25T21:17:58.176562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-10-25T21:17:58.176608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-10-25T21:17:58.176629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-10-25T21:17:58.176664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-10-25T21:17:58.176702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-10-25T21:17:58.178992Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-260000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-25T21:17:58.17926Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-25T21:17:58.179036Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-25T21:17:58.181856Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-25T21:17:58.182132Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-25T21:17:58.182178Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-25T21:17:58.183047Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-10-25T21:18:24.501478Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-25T21:18:24.501512Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-260000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-10-25T21:18:24.501549Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-25T21:18:24.501594Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-25T21:18:24.509368Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-25T21:18:24.50939Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-25T21:18:24.509429Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-10-25T21:18:24.510602Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-10-25T21:18:24.510627Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-10-25T21:18:24.510631Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-260000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [b4c65fe2189f] <==
	* {"level":"info","ts":"2023-10-25T21:18:37.888094Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-25T21:18:37.888113Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-25T21:18:37.88822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-10-25T21:18:37.888269Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-10-25T21:18:37.888338Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:18:37.888365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:18:37.890888Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-25T21:18:37.891017Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-25T21:18:37.891051Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-25T21:18:37.891083Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-10-25T21:18:37.891117Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-10-25T21:18:39.782587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-25T21:18:39.782751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-25T21:18:39.782825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-10-25T21:18:39.78286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-10-25T21:18:39.782882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-10-25T21:18:39.782909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-10-25T21:18:39.78293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-10-25T21:18:39.785265Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-260000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-25T21:18:39.785268Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-25T21:18:39.785691Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-25T21:18:39.78574Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-25T21:18:39.78558Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-25T21:18:39.788213Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-10-25T21:18:39.788254Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  21:19:59 up 2 min,  0 users,  load average: 1.17, 0.50, 0.19
	Linux functional-260000 5.10.57 #1 SMP PREEMPT Mon Oct 16 17:34:05 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [91a8aa9b4f49] <==
	* I1025 21:18:40.449708       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 21:18:40.449628       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 21:18:40.451503       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 21:18:40.469679       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 21:18:40.469736       1 aggregator.go:166] initial CRD sync complete...
	I1025 21:18:40.469743       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 21:18:40.469746       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 21:18:40.469749       1 cache.go:39] Caches are synced for autoregister controller
	I1025 21:18:41.351311       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1025 21:18:41.468817       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I1025 21:18:41.469280       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 21:18:41.471251       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 21:18:41.960008       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 21:18:41.963188       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 21:18:41.986518       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1025 21:18:41.993733       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 21:18:41.996851       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 21:19:00.958371       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.180.218"}
	I1025 21:19:06.766797       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 21:19:06.823987       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.32.25"}
	I1025 21:19:17.194427       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.106.41.166"}
	I1025 21:19:28.694334       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.182.208"}
	I1025 21:19:53.277783       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 21:19:53.355731       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.4.98"}
	I1025 21:19:53.382516       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.226.213"}
	
	* 
	* ==> kube-controller-manager [72fb5640000a] <==
	* I1025 21:18:11.601366       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1025 21:18:11.609170       1 shared_informer.go:318] Caches are synced for disruption
	I1025 21:18:11.618429       1 shared_informer.go:318] Caches are synced for deployment
	I1025 21:18:11.619630       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1025 21:18:11.619688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="25.914µs"
	I1025 21:18:11.620776       1 shared_informer.go:318] Caches are synced for PV protection
	I1025 21:18:11.625056       1 shared_informer.go:318] Caches are synced for ephemeral
	I1025 21:18:11.626163       1 shared_informer.go:318] Caches are synced for daemon sets
	I1025 21:18:11.645635       1 shared_informer.go:318] Caches are synced for job
	I1025 21:18:11.645684       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1025 21:18:11.645709       1 shared_informer.go:318] Caches are synced for stateful set
	I1025 21:18:11.694963       1 shared_informer.go:318] Caches are synced for attach detach
	I1025 21:18:11.705087       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 21:18:11.716303       1 shared_informer.go:318] Caches are synced for persistent volume
	I1025 21:18:11.722792       1 shared_informer.go:318] Caches are synced for taint
	I1025 21:18:11.722844       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1025 21:18:11.722863       1 taint_manager.go:211] "Sending events to api server"
	I1025 21:18:11.722888       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1025 21:18:11.722947       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-260000"
	I1025 21:18:11.722999       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1025 21:18:11.723055       1 event.go:307] "Event occurred" object="functional-260000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-260000 event: Registered Node functional-260000 in Controller"
	I1025 21:18:11.799466       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 21:18:12.105872       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 21:18:12.105896       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 21:18:12.120144       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [c4ce90c92a30] <==
	* E1025 21:19:53.318673       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 21:19:53.318689       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1025 21:19:53.319378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.397708ms"
	E1025 21:19:53.319589       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 21:19:53.322748       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="1.925157ms"
	E1025 21:19:53.322760       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 21:19:53.322785       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1025 21:19:53.328353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="1.732366ms"
	E1025 21:19:53.328387       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 21:19:53.328407       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1025 21:19:53.341379       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-66mbj"
	I1025 21:19:53.351648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="12.953185ms"
	I1025 21:19:53.356592       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="4.839434ms"
	I1025 21:19:53.358919       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="11.5µs"
	I1025 21:19:53.356969       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-fg5k5"
	I1025 21:19:53.361483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.794612ms"
	I1025 21:19:53.364847       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="21.083µs"
	I1025 21:19:53.369583       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.999918ms"
	I1025 21:19:53.386511       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1025 21:19:53.387430       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.738826ms"
	I1025 21:19:53.387485       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="28.166µs"
	I1025 21:19:55.614664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="30.083µs"
	I1025 21:19:56.628167       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="4.035061ms"
	I1025 21:19:56.628191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="10.625µs"
	I1025 21:19:59.056374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="23.708µs"
	
	* 
	* ==> kube-proxy [6c1a433b8dbe] <==
	* I1025 21:18:41.642668       1 server_others.go:69] "Using iptables proxy"
	I1025 21:18:41.658094       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I1025 21:18:41.667342       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1025 21:18:41.667357       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 21:18:41.668700       1 server_others.go:152] "Using iptables Proxier"
	I1025 21:18:41.668720       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 21:18:41.668782       1 server.go:846] "Version info" version="v1.28.3"
	I1025 21:18:41.668790       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 21:18:41.669060       1 config.go:188] "Starting service config controller"
	I1025 21:18:41.669068       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 21:18:41.669075       1 config.go:315] "Starting node config controller"
	I1025 21:18:41.669077       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 21:18:41.669164       1 config.go:97] "Starting endpoint slice config controller"
	I1025 21:18:41.669167       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 21:18:41.769121       1 shared_informer.go:318] Caches are synced for service config
	I1025 21:18:41.769121       1 shared_informer.go:318] Caches are synced for node config
	I1025 21:18:41.770207       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [d23077c40c42] <==
	* I1025 21:17:56.687950       1 server_others.go:69] "Using iptables proxy"
	I1025 21:17:58.835535       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I1025 21:17:58.856216       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1025 21:17:58.856231       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 21:17:58.856892       1 server_others.go:152] "Using iptables Proxier"
	I1025 21:17:58.856920       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 21:17:58.856981       1 server.go:846] "Version info" version="v1.28.3"
	I1025 21:17:58.856989       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 21:17:58.857592       1 config.go:188] "Starting service config controller"
	I1025 21:17:58.857604       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 21:17:58.857612       1 config.go:97] "Starting endpoint slice config controller"
	I1025 21:17:58.857614       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 21:17:58.857793       1 config.go:315] "Starting node config controller"
	I1025 21:17:58.857799       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 21:17:58.958016       1 shared_informer.go:318] Caches are synced for node config
	I1025 21:17:58.958036       1 shared_informer.go:318] Caches are synced for service config
	I1025 21:17:58.958105       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [18b46705a593] <==
	* I1025 21:17:56.848305       1 serving.go:348] Generated self-signed cert in-memory
	W1025 21:17:58.787566       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 21:17:58.787669       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 21:17:58.787689       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 21:17:58.787713       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 21:17:58.825920       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1025 21:17:58.825977       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 21:17:58.826851       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1025 21:17:58.827030       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 21:17:58.827086       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 21:17:58.827137       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 21:17:58.927573       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 21:18:24.469223       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1025 21:18:24.469257       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1025 21:18:24.469303       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1025 21:18:24.469415       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [c37637190c6f] <==
	* I1025 21:18:38.574886       1 serving.go:348] Generated self-signed cert in-memory
	W1025 21:18:40.375111       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 21:18:40.375176       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 21:18:40.375198       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 21:18:40.375224       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 21:18:40.389339       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1025 21:18:40.389391       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 21:18:40.390369       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1025 21:18:40.390924       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 21:18:40.391022       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 21:18:40.391105       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 21:18:40.491690       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-25 21:17:06 UTC, ends at Wed 2023-10-25 21:19:59 UTC. --
	Oct 25 21:19:45 functional-260000 kubelet[7255]: I1025 21:19:45.545230    7255 scope.go:117] "RemoveContainer" containerID="c0987ad97e4e849c628a7ea952431f245054a6fc07a002f8d5f53d0230374d20"
	Oct 25 21:19:45 functional-260000 kubelet[7255]: I1025 21:19:45.545389    7255 scope.go:117] "RemoveContainer" containerID="846459fcedb64a5b3e2e8f7f800edd9e697768fccec21d0f8fdacd3ac06b8b47"
	Oct 25 21:19:45 functional-260000 kubelet[7255]: E1025 21:19:45.545470    7255 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-qqzcm_default(da9084e5-8519-43f0-a897-46be5d030584)\"" pod="default/hello-node-connect-7799dfb7c6-qqzcm" podUID="da9084e5-8519-43f0-a897-46be5d030584"
	Oct 25 21:19:48 functional-260000 kubelet[7255]: I1025 21:19:48.679010    7255 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzxpj\" (UniqueName: \"kubernetes.io/projected/e07f164c-5fd5-45c0-97c7-5196ccb67d7b-kube-api-access-tzxpj\") pod \"e07f164c-5fd5-45c0-97c7-5196ccb67d7b\" (UID: \"e07f164c-5fd5-45c0-97c7-5196ccb67d7b\") "
	Oct 25 21:19:48 functional-260000 kubelet[7255]: I1025 21:19:48.679029    7255 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e07f164c-5fd5-45c0-97c7-5196ccb67d7b-test-volume\") pod \"e07f164c-5fd5-45c0-97c7-5196ccb67d7b\" (UID: \"e07f164c-5fd5-45c0-97c7-5196ccb67d7b\") "
	Oct 25 21:19:48 functional-260000 kubelet[7255]: I1025 21:19:48.679051    7255 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e07f164c-5fd5-45c0-97c7-5196ccb67d7b-test-volume" (OuterVolumeSpecName: "test-volume") pod "e07f164c-5fd5-45c0-97c7-5196ccb67d7b" (UID: "e07f164c-5fd5-45c0-97c7-5196ccb67d7b"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 25 21:19:48 functional-260000 kubelet[7255]: I1025 21:19:48.679651    7255 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e07f164c-5fd5-45c0-97c7-5196ccb67d7b-kube-api-access-tzxpj" (OuterVolumeSpecName: "kube-api-access-tzxpj") pod "e07f164c-5fd5-45c0-97c7-5196ccb67d7b" (UID: "e07f164c-5fd5-45c0-97c7-5196ccb67d7b"). InnerVolumeSpecName "kube-api-access-tzxpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 25 21:19:48 functional-260000 kubelet[7255]: I1025 21:19:48.779845    7255 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tzxpj\" (UniqueName: \"kubernetes.io/projected/e07f164c-5fd5-45c0-97c7-5196ccb67d7b-kube-api-access-tzxpj\") on node \"functional-260000\" DevicePath \"\""
	Oct 25 21:19:48 functional-260000 kubelet[7255]: I1025 21:19:48.779860    7255 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e07f164c-5fd5-45c0-97c7-5196ccb67d7b-test-volume\") on node \"functional-260000\" DevicePath \"\""
	Oct 25 21:19:49 functional-260000 kubelet[7255]: I1025 21:19:49.576421    7255 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a3e5769037c6f3658d9ce45539831d1d449dd7564810e2367203fd481ec656b"
	Oct 25 21:19:53 functional-260000 kubelet[7255]: I1025 21:19:53.349076    7255 topology_manager.go:215] "Topology Admit Handler" podUID="c566cadc-36aa-49cf-b8ca-40366ec786e4" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-7fd5cb4ddc-66mbj"
	Oct 25 21:19:53 functional-260000 kubelet[7255]: E1025 21:19:53.349359    7255 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e07f164c-5fd5-45c0-97c7-5196ccb67d7b" containerName="mount-munger"
	Oct 25 21:19:53 functional-260000 kubelet[7255]: I1025 21:19:53.349388    7255 memory_manager.go:346] "RemoveStaleState removing state" podUID="e07f164c-5fd5-45c0-97c7-5196ccb67d7b" containerName="mount-munger"
	Oct 25 21:19:53 functional-260000 kubelet[7255]: I1025 21:19:53.366087    7255 topology_manager.go:215] "Topology Admit Handler" podUID="2f0e6978-2215-457e-837b-9f59a35dd000" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-fg5k5"
	Oct 25 21:19:53 functional-260000 kubelet[7255]: I1025 21:19:53.505760    7255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bngjd\" (UniqueName: \"kubernetes.io/projected/2f0e6978-2215-457e-837b-9f59a35dd000-kube-api-access-bngjd\") pod \"kubernetes-dashboard-8694d4445c-fg5k5\" (UID: \"2f0e6978-2215-457e-837b-9f59a35dd000\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fg5k5"
	Oct 25 21:19:53 functional-260000 kubelet[7255]: I1025 21:19:53.505788    7255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2f0e6978-2215-457e-837b-9f59a35dd000-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-fg5k5\" (UID: \"2f0e6978-2215-457e-837b-9f59a35dd000\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fg5k5"
	Oct 25 21:19:53 functional-260000 kubelet[7255]: I1025 21:19:53.505802    7255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqtq5\" (UniqueName: \"kubernetes.io/projected/c566cadc-36aa-49cf-b8ca-40366ec786e4-kube-api-access-vqtq5\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-66mbj\" (UID: \"c566cadc-36aa-49cf-b8ca-40366ec786e4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-66mbj"
	Oct 25 21:19:53 functional-260000 kubelet[7255]: I1025 21:19:53.505811    7255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c566cadc-36aa-49cf-b8ca-40366ec786e4-tmp-volume\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-66mbj\" (UID: \"c566cadc-36aa-49cf-b8ca-40366ec786e4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-66mbj"
	Oct 25 21:19:55 functional-260000 kubelet[7255]: I1025 21:19:55.045615    7255 scope.go:117] "RemoveContainer" containerID="fc7927d6c38565e7f6e3665453cbae281b2b0b143f3264db67f79c498fbd7845"
	Oct 25 21:19:55 functional-260000 kubelet[7255]: I1025 21:19:55.608571    7255 scope.go:117] "RemoveContainer" containerID="fc7927d6c38565e7f6e3665453cbae281b2b0b143f3264db67f79c498fbd7845"
	Oct 25 21:19:55 functional-260000 kubelet[7255]: I1025 21:19:55.608862    7255 scope.go:117] "RemoveContainer" containerID="a2cb9703f3db2db85c6457fec00043a2aad96793a738ddf9dfe1d734a54736a9"
	Oct 25 21:19:55 functional-260000 kubelet[7255]: E1025 21:19:55.608971    7255 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-mxg96_default(b778b482-bc6e-426d-81a1-b58b90477652)\"" pod="default/hello-node-759d89bdcc-mxg96" podUID="b778b482-bc6e-426d-81a1-b58b90477652"
	Oct 25 21:19:56 functional-260000 kubelet[7255]: I1025 21:19:56.624306    7255 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-66mbj" podStartSLOduration=1.7830134530000001 podCreationTimestamp="2023-10-25 21:19:53 +0000 UTC" firstStartedPulling="2023-10-25 21:19:53.86547369 +0000 UTC m=+76.911673298" lastFinishedPulling="2023-10-25 21:19:55.706740939 +0000 UTC m=+78.752940546" observedRunningTime="2023-10-25 21:19:56.62415416 +0000 UTC m=+79.670353767" watchObservedRunningTime="2023-10-25 21:19:56.624280701 +0000 UTC m=+79.670480308"
	Oct 25 21:19:59 functional-260000 kubelet[7255]: I1025 21:19:59.043034    7255 scope.go:117] "RemoveContainer" containerID="846459fcedb64a5b3e2e8f7f800edd9e697768fccec21d0f8fdacd3ac06b8b47"
	Oct 25 21:19:59 functional-260000 kubelet[7255]: E1025 21:19:59.043134    7255 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-qqzcm_default(da9084e5-8519-43f0-a897-46be5d030584)\"" pod="default/hello-node-connect-7799dfb7c6-qqzcm" podUID="da9084e5-8519-43f0-a897-46be5d030584"
	
	* 
	* ==> storage-provisioner [089678878f52] <==
	* I1025 21:18:41.634450       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 21:18:41.641555       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [738d91db33c2] <==
	* I1025 21:18:53.104581       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 21:18:53.110750       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 21:18:53.110770       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 21:19:10.495664       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 21:19:10.495846       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"43f2be80-6c6c-4ee3-a99b-d4091f4cceb0", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-260000_78611b74-dc80-47b0-a04c-b759103dfa9a became leader
	I1025 21:19:10.495861       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-260000_78611b74-dc80-47b0-a04c-b759103dfa9a!
	I1025 21:19:10.596820       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-260000_78611b74-dc80-47b0-a04c-b759103dfa9a!
	I1025 21:19:24.489146       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1025 21:19:24.489223       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    13febcce-86bd-4254-ac56-5dd3511f52b6 386 0 2023-10-25 21:17:37 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-10-25 21:17:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-2c528aab-70d3-4d79-a1f3-97d204e4c52d &PersistentVolumeClaim{ObjectMeta:{myclaim  default  2c528aab-70d3-4d79-a1f3-97d204e4c52d 706 0 2023-10-25 21:19:24 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-10-25 21:19:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-10-25 21:19:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1025 21:19:24.489570       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-2c528aab-70d3-4d79-a1f3-97d204e4c52d" provisioned
	I1025 21:19:24.489610       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1025 21:19:24.489625       1 volume_store.go:212] Trying to save persistentvolume "pvc-2c528aab-70d3-4d79-a1f3-97d204e4c52d"
	I1025 21:19:24.489773       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2c528aab-70d3-4d79-a1f3-97d204e4c52d", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1025 21:19:24.494346       1 volume_store.go:219] persistentvolume "pvc-2c528aab-70d3-4d79-a1f3-97d204e4c52d" saved
	I1025 21:19:24.494466       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2c528aab-70d3-4d79-a1f3-97d204e4c52d", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-2c528aab-70d3-4d79-a1f3-97d204e4c52d
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-260000 -n functional-260000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-260000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-8694d4445c-fg5k5
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-260000 describe pod busybox-mount kubernetes-dashboard-8694d4445c-fg5k5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-260000 describe pod busybox-mount kubernetes-dashboard-8694d4445c-fg5k5: exit status 1 (40.768791ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-260000/192.168.105.4
	Start Time:       Wed, 25 Oct 2023 14:19:45 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://8a8816bcfee80d43050a9dad15f1bf9d3d3f101de2ddb7bc3550911cd6d90c0a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 25 Oct 2023 14:19:46 -0700
	      Finished:     Wed, 25 Oct 2023 14:19:46 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tzxpj (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tzxpj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  14s   default-scheduler  Successfully assigned default/busybox-mount to functional-260000
	  Normal  Pulling    14s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     13s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.015s (1.015s including waiting)
	  Normal  Created    13s   kubelet            Created container mount-munger
	  Normal  Started    13s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-fg5k5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-260000 describe pod busybox-mount kubernetes-dashboard-8694d4445c-fg5k5: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (30.93s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithBuildArg (1.1s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-069000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-069000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 9ca15fe6f818
	Removing intermediate container 9ca15fe6f818
	 ---> 850f5caaa300
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 5c6ea5c35681
	Removing intermediate container 5c6ea5c35681
	 ---> 4fff45e527fe
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in d0de61462cd5
	exec /bin/sh: exec format error
	

                                                
                                                
-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-069000 -n image-069000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-069000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-260000 ssh findmnt            | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-260000 ssh findmnt            | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-260000 ssh findmnt            | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|                | -T /mount3                               |                   |         |         |                     |                     |
	| ssh            | functional-260000 ssh findmnt            | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-260000 ssh findmnt            | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-260000 ssh findmnt            | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|                | -T /mount3                               |                   |         |         |                     |                     |
	| mount          | -p functional-260000                     | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|                | --kill=true                              |                   |         |         |                     |                     |
	| start          | -p functional-260000                     | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-260000 --dry-run           | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-260000                     | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                       | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:20 PDT |
	|                | -p functional-260000                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	| update-context | functional-260000                        | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-260000                        | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-260000                        | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| image          | functional-260000                        | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:20 PDT |
	|                | image ls --format short                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-260000                        | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                | image ls --format yaml                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| ssh            | functional-260000 ssh pgrep              | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT |                     |
	|                | buildkitd                                |                   |         |         |                     |                     |
	| image          | functional-260000 image build -t         | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                | localhost/my-image:functional-260000     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                   |         |         |                     |                     |
	| image          | functional-260000 image ls               | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	| image          | functional-260000                        | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                | image ls --format json                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-260000                        | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                | image ls --format table                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| delete         | -p functional-260000                     | functional-260000 | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	| start          | -p image-069000 --driver=qemu2           | image-069000      | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                |                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-069000      | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                | ./testdata/image-build/test-normal       |                   |         |         |                     |                     |
	|                | -p image-069000                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-069000      | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                   |         |         |                     |                     |
	|                | image-069000                             |                   |         |         |                     |                     |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 14:20:02
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 14:20:02.740900    2764 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:20:02.741031    2764 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:20:02.741032    2764 out.go:309] Setting ErrFile to fd 2...
	I1025 14:20:02.741035    2764 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:20:02.741164    2764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:20:02.742156    2764 out.go:303] Setting JSON to false
	I1025 14:20:02.759453    2764 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1176,"bootTime":1698267626,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:20:02.759520    2764 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:20:02.763717    2764 out.go:177] * [image-069000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:20:02.770764    2764 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:20:02.770823    2764 notify.go:220] Checking for updates...
	I1025 14:20:02.777662    2764 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:20:02.780723    2764 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:20:02.783660    2764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:20:02.786707    2764 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:20:02.789707    2764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:20:02.792889    2764 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:20:02.796676    2764 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:20:02.802673    2764 start.go:298] selected driver: qemu2
	I1025 14:20:02.802678    2764 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:20:02.802685    2764 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:20:02.802758    2764 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:20:02.806663    2764 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:20:02.813374    2764 start_flags.go:386] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1025 14:20:02.813460    2764 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 14:20:02.813473    2764 cni.go:84] Creating CNI manager for ""
	I1025 14:20:02.813480    2764 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:20:02.813484    2764 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 14:20:02.813491    2764 start_flags.go:323] config:
	{Name:image-069000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:image-069000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:20:02.818150    2764 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:20:02.824718    2764 out.go:177] * Starting control plane node image-069000 in cluster image-069000
	I1025 14:20:02.828562    2764 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:20:02.828584    2764 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:20:02.828594    2764 cache.go:56] Caching tarball of preloaded images
	I1025 14:20:02.828659    2764 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:20:02.828663    2764 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:20:02.828877    2764 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/config.json ...
	I1025 14:20:02.828886    2764 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/config.json: {Name:mk7f50abbbd9c6c5d7c47363a5f4e8b4f20d4a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:20:02.829079    2764 start.go:365] acquiring machines lock for image-069000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:20:02.829107    2764 start.go:369] acquired machines lock for "image-069000" in 24.291µs
	I1025 14:20:02.829119    2764 start.go:93] Provisioning new machine with config: &{Name:image-069000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:image-069000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:20:02.829152    2764 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:20:02.837518    2764 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1025 14:20:02.860130    2764 start.go:159] libmachine.API.Create for "image-069000" (driver="qemu2")
	I1025 14:20:02.860154    2764 client.go:168] LocalClient.Create starting
	I1025 14:20:02.860223    2764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:20:02.860246    2764 main.go:141] libmachine: Decoding PEM data...
	I1025 14:20:02.860254    2764 main.go:141] libmachine: Parsing certificate...
	I1025 14:20:02.860285    2764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:20:02.860300    2764 main.go:141] libmachine: Decoding PEM data...
	I1025 14:20:02.860307    2764 main.go:141] libmachine: Parsing certificate...
	I1025 14:20:02.860615    2764 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:20:03.110661    2764 main.go:141] libmachine: Creating SSH key...
	I1025 14:20:03.203674    2764 main.go:141] libmachine: Creating Disk image...
	I1025 14:20:03.203678    2764 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:20:03.203842    2764 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/disk.qcow2
	I1025 14:20:03.261864    2764 main.go:141] libmachine: STDOUT: 
	I1025 14:20:03.261884    2764 main.go:141] libmachine: STDERR: 
	I1025 14:20:03.261944    2764 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/disk.qcow2 +20000M
	I1025 14:20:03.272816    2764 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:20:03.272828    2764 main.go:141] libmachine: STDERR: 
	I1025 14:20:03.272849    2764 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/disk.qcow2
	I1025 14:20:03.272857    2764 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:20:03.272893    2764 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:d0:5d:a9:ae:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/disk.qcow2
	I1025 14:20:03.325699    2764 main.go:141] libmachine: STDOUT: 
	I1025 14:20:03.325718    2764 main.go:141] libmachine: STDERR: 
	I1025 14:20:03.325722    2764 main.go:141] libmachine: Attempt 0
	I1025 14:20:03.325733    2764 main.go:141] libmachine: Searching for d6:d0:5d:a9:ae:28 in /var/db/dhcpd_leases ...
	I1025 14:20:03.329444    2764 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1025 14:20:03.329459    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:1e:27:9f:ef:60:cc ID:1,1e:27:9f:ef:60:cc Lease:0x653ad752}
	I1025 14:20:03.329465    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5e:f1:43:8d:26:69 ID:1,5e:f1:43:8d:26:69 Lease:0x653985c1}
	I1025 14:20:03.329469    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x65398536}
	I1025 14:20:05.331650    2764 main.go:141] libmachine: Attempt 1
	I1025 14:20:05.331703    2764 main.go:141] libmachine: Searching for d6:d0:5d:a9:ae:28 in /var/db/dhcpd_leases ...
	I1025 14:20:05.332082    2764 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1025 14:20:05.332140    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:1e:27:9f:ef:60:cc ID:1,1e:27:9f:ef:60:cc Lease:0x653ad752}
	I1025 14:20:05.332182    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5e:f1:43:8d:26:69 ID:1,5e:f1:43:8d:26:69 Lease:0x653985c1}
	I1025 14:20:05.332220    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x65398536}
	I1025 14:20:07.334467    2764 main.go:141] libmachine: Attempt 2
	I1025 14:20:07.334520    2764 main.go:141] libmachine: Searching for d6:d0:5d:a9:ae:28 in /var/db/dhcpd_leases ...
	I1025 14:20:07.334809    2764 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1025 14:20:07.334851    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:1e:27:9f:ef:60:cc ID:1,1e:27:9f:ef:60:cc Lease:0x653ad752}
	I1025 14:20:07.334878    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5e:f1:43:8d:26:69 ID:1,5e:f1:43:8d:26:69 Lease:0x653985c1}
	I1025 14:20:07.334905    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x65398536}
	I1025 14:20:09.337069    2764 main.go:141] libmachine: Attempt 3
	I1025 14:20:09.337084    2764 main.go:141] libmachine: Searching for d6:d0:5d:a9:ae:28 in /var/db/dhcpd_leases ...
	I1025 14:20:09.337178    2764 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1025 14:20:09.337188    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:1e:27:9f:ef:60:cc ID:1,1e:27:9f:ef:60:cc Lease:0x653ad752}
	I1025 14:20:09.337193    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5e:f1:43:8d:26:69 ID:1,5e:f1:43:8d:26:69 Lease:0x653985c1}
	I1025 14:20:09.337197    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x65398536}
	I1025 14:20:11.339241    2764 main.go:141] libmachine: Attempt 4
	I1025 14:20:11.339245    2764 main.go:141] libmachine: Searching for d6:d0:5d:a9:ae:28 in /var/db/dhcpd_leases ...
	I1025 14:20:11.339276    2764 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1025 14:20:11.339282    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:1e:27:9f:ef:60:cc ID:1,1e:27:9f:ef:60:cc Lease:0x653ad752}
	I1025 14:20:11.339287    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5e:f1:43:8d:26:69 ID:1,5e:f1:43:8d:26:69 Lease:0x653985c1}
	I1025 14:20:11.339291    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x65398536}
	I1025 14:20:13.341324    2764 main.go:141] libmachine: Attempt 5
	I1025 14:20:13.341328    2764 main.go:141] libmachine: Searching for d6:d0:5d:a9:ae:28 in /var/db/dhcpd_leases ...
	I1025 14:20:13.341352    2764 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1025 14:20:13.341356    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:1e:27:9f:ef:60:cc ID:1,1e:27:9f:ef:60:cc Lease:0x653ad752}
	I1025 14:20:13.341360    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5e:f1:43:8d:26:69 ID:1,5e:f1:43:8d:26:69 Lease:0x653985c1}
	I1025 14:20:13.341365    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x65398536}
	I1025 14:20:15.343446    2764 main.go:141] libmachine: Attempt 6
	I1025 14:20:15.343455    2764 main.go:141] libmachine: Searching for d6:d0:5d:a9:ae:28 in /var/db/dhcpd_leases ...
	I1025 14:20:15.343593    2764 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1025 14:20:15.343606    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:1e:27:9f:ef:60:cc ID:1,1e:27:9f:ef:60:cc Lease:0x653ad752}
	I1025 14:20:15.343613    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5e:f1:43:8d:26:69 ID:1,5e:f1:43:8d:26:69 Lease:0x653985c1}
	I1025 14:20:15.343617    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x65398536}
	I1025 14:20:17.345732    2764 main.go:141] libmachine: Attempt 7
	I1025 14:20:17.345777    2764 main.go:141] libmachine: Searching for d6:d0:5d:a9:ae:28 in /var/db/dhcpd_leases ...
	I1025 14:20:17.345867    2764 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 14:20:17.345878    2764 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d6:d0:5d:a9:ae:28 ID:1,d6:d0:5d:a9:ae:28 Lease:0x653ad810}
	I1025 14:20:17.345881    2764 main.go:141] libmachine: Found match: d6:d0:5d:a9:ae:28
	I1025 14:20:17.345891    2764 main.go:141] libmachine: IP: 192.168.105.5
	I1025 14:20:17.345895    2764 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I1025 14:20:18.350533    2764 machine.go:88] provisioning docker machine ...
	I1025 14:20:18.350548    2764 buildroot.go:166] provisioning hostname "image-069000"
	I1025 14:20:18.350597    2764 main.go:141] libmachine: Using SSH client type: native
	I1025 14:20:18.350905    2764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050f30c0] 0x1050f5830 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1025 14:20:18.350910    2764 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-069000 && echo "image-069000" | sudo tee /etc/hostname
	I1025 14:20:18.424473    2764 main.go:141] libmachine: SSH cmd err, output: <nil>: image-069000
	
	I1025 14:20:18.424517    2764 main.go:141] libmachine: Using SSH client type: native
	I1025 14:20:18.424777    2764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050f30c0] 0x1050f5830 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1025 14:20:18.424784    2764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-069000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-069000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-069000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 14:20:18.495498    2764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 14:20:18.495505    2764 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-1304/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-1304/.minikube}
	I1025 14:20:18.495512    2764 buildroot.go:174] setting up certificates
	I1025 14:20:18.495515    2764 provision.go:83] configureAuth start
	I1025 14:20:18.495518    2764 provision.go:138] copyHostCerts
	I1025 14:20:18.495582    2764 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.pem, removing ...
	I1025 14:20:18.495586    2764 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.pem
	I1025 14:20:18.495711    2764 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.pem (1082 bytes)
	I1025 14:20:18.495882    2764 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-1304/.minikube/cert.pem, removing ...
	I1025 14:20:18.495883    2764 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-1304/.minikube/cert.pem
	I1025 14:20:18.495922    2764 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-1304/.minikube/cert.pem (1123 bytes)
	I1025 14:20:18.496010    2764 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-1304/.minikube/key.pem, removing ...
	I1025 14:20:18.496014    2764 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-1304/.minikube/key.pem
	I1025 14:20:18.496063    2764 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-1304/.minikube/key.pem (1675 bytes)
	I1025 14:20:18.496147    2764 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca-key.pem org=jenkins.image-069000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-069000]
	I1025 14:20:18.598973    2764 provision.go:172] copyRemoteCerts
	I1025 14:20:18.599007    2764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 14:20:18.599013    2764 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/id_rsa Username:docker}
	I1025 14:20:18.637846    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1025 14:20:18.645387    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 14:20:18.652507    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 14:20:18.659138    2764 provision.go:86] duration metric: configureAuth took 163.61925ms
	I1025 14:20:18.659144    2764 buildroot.go:189] setting minikube options for container-runtime
	I1025 14:20:18.659242    2764 config.go:182] Loaded profile config "image-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:20:18.659281    2764 main.go:141] libmachine: Using SSH client type: native
	I1025 14:20:18.659484    2764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050f30c0] 0x1050f5830 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1025 14:20:18.659487    2764 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 14:20:18.728418    2764 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 14:20:18.728423    2764 buildroot.go:70] root file system type: tmpfs
	I1025 14:20:18.728474    2764 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 14:20:18.728519    2764 main.go:141] libmachine: Using SSH client type: native
	I1025 14:20:18.728769    2764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050f30c0] 0x1050f5830 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1025 14:20:18.728805    2764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 14:20:18.803184    2764 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 14:20:18.803229    2764 main.go:141] libmachine: Using SSH client type: native
	I1025 14:20:18.803499    2764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050f30c0] 0x1050f5830 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1025 14:20:18.803508    2764 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 14:20:19.151200    2764 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1025 14:20:19.151208    2764 machine.go:91] provisioned docker machine in 800.667625ms
	I1025 14:20:19.151212    2764 client.go:171] LocalClient.Create took 16.291052833s
	I1025 14:20:19.151221    2764 start.go:167] duration metric: libmachine.API.Create for "image-069000" took 16.291098167s
	I1025 14:20:19.151225    2764 start.go:300] post-start starting for "image-069000" (driver="qemu2")
	I1025 14:20:19.151230    2764 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 14:20:19.151274    2764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 14:20:19.151281    2764 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/id_rsa Username:docker}
	I1025 14:20:19.188217    2764 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 14:20:19.189719    2764 info.go:137] Remote host: Buildroot 2021.02.12
	I1025 14:20:19.189723    2764 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-1304/.minikube/addons for local assets ...
	I1025 14:20:19.189797    2764 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-1304/.minikube/files for local assets ...
	I1025 14:20:19.189891    2764 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-1304/.minikube/files/etc/ssl/certs/17232.pem -> 17232.pem in /etc/ssl/certs
	I1025 14:20:19.189993    2764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 14:20:19.192742    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/files/etc/ssl/certs/17232.pem --> /etc/ssl/certs/17232.pem (1708 bytes)
	I1025 14:20:19.199949    2764 start.go:303] post-start completed in 48.718584ms
	I1025 14:20:19.200370    2764 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/config.json ...
	I1025 14:20:19.200525    2764 start.go:128] duration metric: createHost completed in 16.371366542s
	I1025 14:20:19.200553    2764 main.go:141] libmachine: Using SSH client type: native
	I1025 14:20:19.200768    2764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050f30c0] 0x1050f5830 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1025 14:20:19.200771    2764 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1025 14:20:19.268442    2764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698268819.636239043
	
	I1025 14:20:19.268447    2764 fix.go:206] guest clock: 1698268819.636239043
	I1025 14:20:19.268452    2764 fix.go:219] Guest: 2023-10-25 14:20:19.636239043 -0700 PDT Remote: 2023-10-25 14:20:19.200526 -0700 PDT m=+16.481748293 (delta=435.713043ms)
	I1025 14:20:19.268460    2764 fix.go:190] guest clock delta is within tolerance: 435.713043ms
	I1025 14:20:19.268462    2764 start.go:83] releasing machines lock for "image-069000", held for 16.439348667s
	I1025 14:20:19.268704    2764 ssh_runner.go:195] Run: cat /version.json
	I1025 14:20:19.268710    2764 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/id_rsa Username:docker}
	I1025 14:20:19.268716    2764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 14:20:19.268736    2764 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/id_rsa Username:docker}
	I1025 14:20:19.349444    2764 ssh_runner.go:195] Run: systemctl --version
	I1025 14:20:19.351518    2764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 14:20:19.353265    2764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 14:20:19.353291    2764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 14:20:19.358630    2764 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 14:20:19.358635    2764 start.go:472] detecting cgroup driver to use...
	I1025 14:20:19.358702    2764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 14:20:19.364442    2764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 14:20:19.367614    2764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 14:20:19.370511    2764 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 14:20:19.370532    2764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 14:20:19.373865    2764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 14:20:19.377326    2764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 14:20:19.380894    2764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 14:20:19.384394    2764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 14:20:19.387553    2764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 14:20:19.390460    2764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 14:20:19.393600    2764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 14:20:19.396921    2764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 14:20:19.476162    2764 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 14:20:19.484309    2764 start.go:472] detecting cgroup driver to use...
	I1025 14:20:19.484356    2764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 14:20:19.490192    2764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 14:20:19.495108    2764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 14:20:19.501119    2764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 14:20:19.506142    2764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 14:20:19.511043    2764 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1025 14:20:19.577518    2764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 14:20:19.583112    2764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 14:20:19.588477    2764 ssh_runner.go:195] Run: which cri-dockerd
	I1025 14:20:19.589697    2764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 14:20:19.592848    2764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 14:20:19.597724    2764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 14:20:19.663066    2764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 14:20:19.743885    2764 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 14:20:19.743940    2764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 14:20:19.749250    2764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 14:20:19.827869    2764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 14:20:20.995523    2764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.167640833s)
	I1025 14:20:20.995577    2764 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 14:20:21.071664    2764 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 14:20:21.147197    2764 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 14:20:21.223838    2764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 14:20:21.297238    2764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 14:20:21.305014    2764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 14:20:21.399714    2764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 14:20:21.422647    2764 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 14:20:21.422716    2764 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 14:20:21.425689    2764 start.go:540] Will wait 60s for crictl version
	I1025 14:20:21.425726    2764 ssh_runner.go:195] Run: which crictl
	I1025 14:20:21.426986    2764 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 14:20:21.449802    2764 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1025 14:20:21.449864    2764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 14:20:21.459856    2764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 14:20:21.473571    2764 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1025 14:20:21.473686    2764 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1025 14:20:21.475044    2764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 14:20:21.478635    2764 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:20:21.478672    2764 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 14:20:21.483798    2764 docker.go:693] Got preloaded images: 
	I1025 14:20:21.483803    2764 docker.go:699] registry.k8s.io/kube-apiserver:v1.28.3 wasn't preloaded
	I1025 14:20:21.483840    2764 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 14:20:21.486825    2764 ssh_runner.go:195] Run: which lz4
	I1025 14:20:21.488181    2764 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1025 14:20:21.489470    2764 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 14:20:21.489477    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (357729134 bytes)
	I1025 14:20:22.823281    2764 docker.go:657] Took 1.335126 seconds to copy over tarball
	I1025 14:20:22.823331    2764 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 14:20:23.887148    2764 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.063804s)
	I1025 14:20:23.887158    2764 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 14:20:23.902581    2764 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 14:20:23.906022    2764 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1025 14:20:23.911126    2764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 14:20:23.986827    2764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 14:20:25.446506    2764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.459651s)
	I1025 14:20:25.446586    2764 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 14:20:25.452565    2764 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 14:20:25.452571    2764 cache_images.go:84] Images are preloaded, skipping loading
	I1025 14:20:25.452624    2764 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 14:20:25.460264    2764 cni.go:84] Creating CNI manager for ""
	I1025 14:20:25.460271    2764 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:20:25.460279    2764 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 14:20:25.460287    2764 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-069000 NodeName:image-069000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 14:20:25.460362    2764 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-069000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 14:20:25.460389    2764 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-069000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:image-069000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 14:20:25.460446    2764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 14:20:25.463494    2764 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 14:20:25.463518    2764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 14:20:25.466185    2764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1025 14:20:25.471316    2764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 14:20:25.476369    2764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I1025 14:20:25.481639    2764 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I1025 14:20:25.483195    2764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 14:20:25.486754    2764 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000 for IP: 192.168.105.5
	I1025 14:20:25.486760    2764 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b24ebfb6727e8dcf7d0828ec4a3e725ccc80b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:20:25.486895    2764 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.key
	I1025 14:20:25.486928    2764 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.key
	I1025 14:20:25.486952    2764 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/client.key
	I1025 14:20:25.486958    2764 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/client.crt with IP's: []
	I1025 14:20:25.629382    2764 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/client.crt ...
	I1025 14:20:25.629386    2764 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/client.crt: {Name:mk73de249c8b2e9368722e3f2c7fc1b5e602a404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:20:25.629661    2764 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/client.key ...
	I1025 14:20:25.629662    2764 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/client.key: {Name:mk1ff35c512b355c38b9b31d2390e83be8bb0ded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:20:25.629775    2764 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/apiserver.key.e69b33ca
	I1025 14:20:25.629780    2764 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 14:20:25.710160    2764 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/apiserver.crt.e69b33ca ...
	I1025 14:20:25.710162    2764 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/apiserver.crt.e69b33ca: {Name:mk9450d8c36fe923c0f6223378954c44f85a7ccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:20:25.710278    2764 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/apiserver.key.e69b33ca ...
	I1025 14:20:25.710279    2764 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/apiserver.key.e69b33ca: {Name:mkea2176eaa38020e19271720df8725ecc866fef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:20:25.710376    2764 certs.go:337] copying /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/apiserver.crt
	I1025 14:20:25.710558    2764 certs.go:341] copying /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/apiserver.key
	I1025 14:20:25.710665    2764 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/proxy-client.key
	I1025 14:20:25.710671    2764 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/proxy-client.crt with IP's: []
	I1025 14:20:25.763584    2764 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/proxy-client.crt ...
	I1025 14:20:25.763587    2764 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/proxy-client.crt: {Name:mk8eda386e4e9e00db5734c0cc93476e7d1e3adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:20:25.763728    2764 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/proxy-client.key ...
	I1025 14:20:25.763731    2764 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/proxy-client.key: {Name:mkab2af6b89a447337588ffaedffd76f81ab4738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:20:25.763951    2764 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/1723.pem (1338 bytes)
	W1025 14:20:25.763976    2764 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/1723_empty.pem, impossibly tiny 0 bytes
	I1025 14:20:25.763980    2764 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 14:20:25.763996    2764 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem (1082 bytes)
	I1025 14:20:25.764011    2764 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem (1123 bytes)
	I1025 14:20:25.764026    2764 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/key.pem (1675 bytes)
	I1025 14:20:25.764060    2764 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/files/etc/ssl/certs/17232.pem (1708 bytes)
	I1025 14:20:25.764393    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 14:20:25.771981    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 14:20:25.778466    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 14:20:25.785789    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/image-069000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 14:20:25.793347    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 14:20:25.800755    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 14:20:25.807668    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 14:20:25.814369    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 14:20:25.821643    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/files/etc/ssl/certs/17232.pem --> /usr/share/ca-certificates/17232.pem (1708 bytes)
	I1025 14:20:25.829190    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 14:20:25.836406    2764 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/1723.pem --> /usr/share/ca-certificates/1723.pem (1338 bytes)
	I1025 14:20:25.843371    2764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 14:20:25.848204    2764 ssh_runner.go:195] Run: openssl version
	I1025 14:20:25.850097    2764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1723.pem && ln -fs /usr/share/ca-certificates/1723.pem /etc/ssl/certs/1723.pem"
	I1025 14:20:25.853725    2764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1723.pem
	I1025 14:20:25.855461    2764 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:16 /usr/share/ca-certificates/1723.pem
	I1025 14:20:25.855480    2764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1723.pem
	I1025 14:20:25.857310    2764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1723.pem /etc/ssl/certs/51391683.0"
	I1025 14:20:25.860641    2764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17232.pem && ln -fs /usr/share/ca-certificates/17232.pem /etc/ssl/certs/17232.pem"
	I1025 14:20:25.863812    2764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17232.pem
	I1025 14:20:25.865189    2764 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:16 /usr/share/ca-certificates/17232.pem
	I1025 14:20:25.865207    2764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17232.pem
	I1025 14:20:25.867149    2764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17232.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 14:20:25.870258    2764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 14:20:25.873658    2764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 14:20:25.875354    2764 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I1025 14:20:25.875372    2764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 14:20:25.877189    2764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 14:20:25.880575    2764 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 14:20:25.881965    2764 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 14:20:25.881989    2764 kubeadm.go:404] StartCluster: {Name:image-069000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.3 ClusterName:image-069000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:20:25.882049    2764 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 14:20:25.895818    2764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 14:20:25.898650    2764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 14:20:25.901793    2764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 14:20:25.905130    2764 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 14:20:25.905141    2764 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 14:20:25.927479    2764 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1025 14:20:25.927548    2764 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 14:20:25.992051    2764 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 14:20:25.992119    2764 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 14:20:25.992161    2764 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 14:20:26.086790    2764 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 14:20:26.095992    2764 out.go:204]   - Generating certificates and keys ...
	I1025 14:20:26.096034    2764 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 14:20:26.096069    2764 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 14:20:26.202687    2764 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 14:20:26.374559    2764 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 14:20:26.443048    2764 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 14:20:26.522030    2764 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 14:20:26.655540    2764 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 14:20:26.655602    2764 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-069000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I1025 14:20:26.691780    2764 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 14:20:26.691853    2764 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-069000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I1025 14:20:26.774230    2764 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 14:20:26.814864    2764 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 14:20:27.032096    2764 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 14:20:27.032131    2764 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 14:20:27.179344    2764 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 14:20:27.362767    2764 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 14:20:27.454591    2764 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 14:20:27.581577    2764 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 14:20:27.581808    2764 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 14:20:27.582780    2764 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 14:20:27.594001    2764 out.go:204]   - Booting up control plane ...
	I1025 14:20:27.594070    2764 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 14:20:27.594108    2764 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 14:20:27.594140    2764 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 14:20:27.594194    2764 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 14:20:27.594247    2764 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 14:20:27.594265    2764 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 14:20:27.672701    2764 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 14:20:31.673553    2764 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.000856 seconds
	I1025 14:20:31.673612    2764 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 14:20:31.679145    2764 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 14:20:32.188499    2764 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 14:20:32.188635    2764 kubeadm.go:322] [mark-control-plane] Marking the node image-069000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 14:20:32.693575    2764 kubeadm.go:322] [bootstrap-token] Using token: 8zsidf.z8er39l9b6eppy8p
	I1025 14:20:32.699896    2764 out.go:204]   - Configuring RBAC rules ...
	I1025 14:20:32.699947    2764 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 14:20:32.700854    2764 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 14:20:32.704656    2764 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 14:20:32.705821    2764 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 14:20:32.707190    2764 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 14:20:32.708258    2764 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 14:20:32.712097    2764 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 14:20:32.846277    2764 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1025 14:20:33.104086    2764 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1025 14:20:33.104450    2764 kubeadm.go:322] 
	I1025 14:20:33.104482    2764 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1025 14:20:33.104484    2764 kubeadm.go:322] 
	I1025 14:20:33.104518    2764 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1025 14:20:33.104519    2764 kubeadm.go:322] 
	I1025 14:20:33.104530    2764 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1025 14:20:33.104580    2764 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 14:20:33.104608    2764 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 14:20:33.104610    2764 kubeadm.go:322] 
	I1025 14:20:33.104638    2764 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1025 14:20:33.104639    2764 kubeadm.go:322] 
	I1025 14:20:33.104675    2764 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 14:20:33.104677    2764 kubeadm.go:322] 
	I1025 14:20:33.104702    2764 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1025 14:20:33.104745    2764 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 14:20:33.104777    2764 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 14:20:33.104778    2764 kubeadm.go:322] 
	I1025 14:20:33.104823    2764 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 14:20:33.104858    2764 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1025 14:20:33.104860    2764 kubeadm.go:322] 
	I1025 14:20:33.104907    2764 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8zsidf.z8er39l9b6eppy8p \
	I1025 14:20:33.104953    2764 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f9ed8a6c1ae5e44374807bc7f35db343f3de11d7a52de7496b63e5c8e8e1eaf6 \
	I1025 14:20:33.104962    2764 kubeadm.go:322] 	--control-plane 
	I1025 14:20:33.104964    2764 kubeadm.go:322] 
	I1025 14:20:33.105010    2764 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1025 14:20:33.105012    2764 kubeadm.go:322] 
	I1025 14:20:33.105050    2764 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8zsidf.z8er39l9b6eppy8p \
	I1025 14:20:33.105101    2764 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f9ed8a6c1ae5e44374807bc7f35db343f3de11d7a52de7496b63e5c8e8e1eaf6 
	I1025 14:20:33.105159    2764 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 14:20:33.105167    2764 cni.go:84] Creating CNI manager for ""
	I1025 14:20:33.105173    2764 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:20:33.107877    2764 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 14:20:33.110927    2764 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 14:20:33.114039    2764 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1025 14:20:33.119399    2764 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 14:20:33.119459    2764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:20:33.119462    2764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc minikube.k8s.io/name=image-069000 minikube.k8s.io/updated_at=2023_10_25T14_20_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:20:33.185216    2764 kubeadm.go:1081] duration metric: took 65.794875ms to wait for elevateKubeSystemPrivileges.
	I1025 14:20:33.185226    2764 ops.go:34] apiserver oom_adj: -16
	I1025 14:20:33.185296    2764 kubeadm.go:406] StartCluster complete in 7.3033065s
	I1025 14:20:33.185306    2764 settings.go:142] acquiring lock: {Name:mka8243895d2abf46689bcbcc2c73a1efa650151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:20:33.185381    2764 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:20:33.185747    2764 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/kubeconfig: {Name:mkdc8e211286b196dbaba95cec2e4580798673af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:20:33.185961    2764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 14:20:33.185998    2764 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 14:20:33.186034    2764 addons.go:69] Setting storage-provisioner=true in profile "image-069000"
	I1025 14:20:33.186039    2764 addons.go:231] Setting addon storage-provisioner=true in "image-069000"
	I1025 14:20:33.186050    2764 config.go:182] Loaded profile config "image-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:20:33.186058    2764 host.go:66] Checking if "image-069000" exists ...
	I1025 14:20:33.186083    2764 addons.go:69] Setting default-storageclass=true in profile "image-069000"
	I1025 14:20:33.186111    2764 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-069000"
	I1025 14:20:33.186311    2764 retry.go:31] will retry after 1.08067779s: connect: dial unix /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/monitor: connect: connection refused
	I1025 14:20:33.187199    2764 addons.go:231] Setting addon default-storageclass=true in "image-069000"
	I1025 14:20:33.187207    2764 host.go:66] Checking if "image-069000" exists ...
	I1025 14:20:33.187940    2764 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 14:20:33.187943    2764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 14:20:33.187948    2764 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/id_rsa Username:docker}
	I1025 14:20:33.196576    2764 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-069000" context rescaled to 1 replicas
	I1025 14:20:33.196589    2764 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:20:33.200478    2764 out.go:177] * Verifying Kubernetes components...
	I1025 14:20:33.207381    2764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 14:20:33.224214    2764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 14:20:33.224507    2764 api_server.go:52] waiting for apiserver process to appear ...
	I1025 14:20:33.224535    2764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 14:20:33.239289    2764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 14:20:33.569714    2764 start.go:926] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I1025 14:20:33.569752    2764 api_server.go:72] duration metric: took 373.1525ms to wait for apiserver process to appear ...
	I1025 14:20:33.569758    2764 api_server.go:88] waiting for apiserver healthz status ...
	I1025 14:20:33.569766    2764 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I1025 14:20:33.573132    2764 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I1025 14:20:33.573788    2764 api_server.go:141] control plane version: v1.28.3
	I1025 14:20:33.573792    2764 api_server.go:131] duration metric: took 4.032ms to wait for apiserver health ...
	I1025 14:20:33.573795    2764 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 14:20:33.576714    2764 system_pods.go:59] 4 kube-system pods found
	I1025 14:20:33.576720    2764 system_pods.go:61] "etcd-image-069000" [9de0d043-8e54-43a3-b198-85591f1514c2] Pending
	I1025 14:20:33.576722    2764 system_pods.go:61] "kube-apiserver-image-069000" [329a9342-9236-44d0-816a-154a0d44f8b0] Pending
	I1025 14:20:33.576724    2764 system_pods.go:61] "kube-controller-manager-image-069000" [6c0984c0-107c-4084-971d-2cdae602495c] Pending
	I1025 14:20:33.576727    2764 system_pods.go:61] "kube-scheduler-image-069000" [50bcc63b-7abd-4f9e-a26a-e63090dc602c] Pending
	I1025 14:20:33.576728    2764 system_pods.go:74] duration metric: took 2.931833ms to wait for pod list to return data ...
	I1025 14:20:33.576735    2764 kubeadm.go:581] duration metric: took 380.134125ms to wait for : map[apiserver:true system_pods:true] ...
	I1025 14:20:33.576740    2764 node_conditions.go:102] verifying NodePressure condition ...
	I1025 14:20:33.578118    2764 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1025 14:20:33.578123    2764 node_conditions.go:123] node cpu capacity is 2
	I1025 14:20:33.578128    2764 node_conditions.go:105] duration metric: took 1.386334ms to run NodePressure ...
	I1025 14:20:33.578132    2764 start.go:228] waiting for startup goroutines ...
	I1025 14:20:34.273771    2764 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 14:20:34.277772    2764 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 14:20:34.277776    2764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 14:20:34.277782    2764 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/image-069000/id_rsa Username:docker}
	I1025 14:20:34.316742    2764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 14:20:34.480664    2764 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1025 14:20:34.488537    2764 addons.go:502] enable addons completed in 1.302538916s: enabled=[default-storageclass storage-provisioner]
	I1025 14:20:34.488548    2764 start.go:233] waiting for cluster config update ...
	I1025 14:20:34.488552    2764 start.go:242] writing updated cluster config ...
	I1025 14:20:34.488784    2764 ssh_runner.go:195] Run: rm -f paused
	I1025 14:20:34.517485    2764 start.go:600] kubectl: 1.27.2, cluster: 1.28.3 (minor skew: 1)
	I1025 14:20:34.521593    2764 out.go:177] * Done! kubectl is now configured to use "image-069000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-10-25 21:20:16 UTC, ends at Wed 2023-10-25 21:20:36 UTC. --
	Oct 25 21:20:28 image-069000 cri-dockerd[1001]: time="2023-10-25T21:20:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/665bc6d707af047ef8c3b8b01856c75b91974196a85c9bbebad730a551c522a1/resolv.conf as [nameserver 192.168.105.1]"
	Oct 25 21:20:29 image-069000 dockerd[1114]: time="2023-10-25T21:20:29.001049506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 25 21:20:29 image-069000 dockerd[1114]: time="2023-10-25T21:20:29.001144714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:20:29 image-069000 dockerd[1114]: time="2023-10-25T21:20:29.001171173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 25 21:20:29 image-069000 dockerd[1114]: time="2023-10-25T21:20:29.001205464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:20:29 image-069000 cri-dockerd[1001]: time="2023-10-25T21:20:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ec52288732bd8bf44db2971e4df7c415670c6ddc6798ce85e6e6e78dfddd7be9/resolv.conf as [nameserver 192.168.105.1]"
	Oct 25 21:20:29 image-069000 dockerd[1114]: time="2023-10-25T21:20:29.074800881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 25 21:20:29 image-069000 dockerd[1114]: time="2023-10-25T21:20:29.074829048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:20:29 image-069000 dockerd[1114]: time="2023-10-25T21:20:29.074836339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 25 21:20:29 image-069000 dockerd[1114]: time="2023-10-25T21:20:29.074840756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:20:29 image-069000 dockerd[1114]: time="2023-10-25T21:20:29.089542506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 25 21:20:29 image-069000 dockerd[1114]: time="2023-10-25T21:20:29.089660256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:20:29 image-069000 dockerd[1114]: time="2023-10-25T21:20:29.089690923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 25 21:20:29 image-069000 dockerd[1114]: time="2023-10-25T21:20:29.089715298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:20:35 image-069000 dockerd[1108]: time="2023-10-25T21:20:35.913531968Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Oct 25 21:20:36 image-069000 dockerd[1108]: time="2023-10-25T21:20:36.039005551Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Oct 25 21:20:36 image-069000 dockerd[1108]: time="2023-10-25T21:20:36.054484259Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Oct 25 21:20:36 image-069000 dockerd[1114]: time="2023-10-25T21:20:36.092056718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 25 21:20:36 image-069000 dockerd[1114]: time="2023-10-25T21:20:36.092280468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:20:36 image-069000 dockerd[1114]: time="2023-10-25T21:20:36.092311218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 25 21:20:36 image-069000 dockerd[1114]: time="2023-10-25T21:20:36.092342509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:20:36 image-069000 dockerd[1108]: time="2023-10-25T21:20:36.242352718Z" level=info msg="ignoring event" container=d0de61462cd58fd01fe242ae9235732bb6030028044608d6a3ac5d6673e74ba1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 21:20:36 image-069000 dockerd[1114]: time="2023-10-25T21:20:36.242502051Z" level=info msg="shim disconnected" id=d0de61462cd58fd01fe242ae9235732bb6030028044608d6a3ac5d6673e74ba1 namespace=moby
	Oct 25 21:20:36 image-069000 dockerd[1114]: time="2023-10-25T21:20:36.242550009Z" level=warning msg="cleaning up after shim disconnected" id=d0de61462cd58fd01fe242ae9235732bb6030028044608d6a3ac5d6673e74ba1 namespace=moby
	Oct 25 21:20:36 image-069000 dockerd[1114]: time="2023-10-25T21:20:36.242554718Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ac912d5aa97e3       9cdd6470f48c8       7 seconds ago       Running             etcd                      0                   ec52288732bd8       etcd-image-069000
	99f646aa531fa       8276439b4f237       7 seconds ago       Running             kube-controller-manager   0                   665bc6d707af0       kube-controller-manager-image-069000
	20cbcc8f21dce       42a4e73724daa       8 seconds ago       Running             kube-scheduler            0                   655264bd69621       kube-scheduler-image-069000
	34a42f1cbcf58       537e9a59ee2fd       8 seconds ago       Running             kube-apiserver            0                   a2f92ed3d2256       kube-apiserver-image-069000
	
	* 
	* ==> describe nodes <==
	* Name:               image-069000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-069000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=image-069000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T14_20_33_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Oct 2023 21:20:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-069000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Oct 2023 21:20:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Oct 2023 21:20:33 +0000   Wed, 25 Oct 2023 21:20:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Oct 2023 21:20:33 +0000   Wed, 25 Oct 2023 21:20:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Oct 2023 21:20:33 +0000   Wed, 25 Oct 2023 21:20:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 25 Oct 2023 21:20:33 +0000   Wed, 25 Oct 2023 21:20:29 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-069000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905016Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905016Ki
	  pods:               110
	System Info:
	  Machine ID:                 42e7ad826d1e4d229d2b6f0cc1e62bde
	  System UUID:                42e7ad826d1e4d229d2b6f0cc1e62bde
	  Boot ID:                    9c0f4d8e-bc93-46f5-9cc8-0132a22960a9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-069000                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (2%!)(MISSING)       0 (0%!)(MISSING)         3s
	  kube-system                 kube-apiserver-image-069000             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3s
	  kube-system                 kube-controller-manager-image-069000    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3s
	  kube-system                 kube-scheduler-image-069000             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%!)(MISSING)  0 (0%!)(MISSING)
	  memory             100Mi (2%!)(MISSING)  0 (0%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-32Mi     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-64Ki     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 8s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)  kubelet  Node image-069000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)  kubelet  Node image-069000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)  kubelet  Node image-069000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s               kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 3s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s               kubelet  Node image-069000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s               kubelet  Node image-069000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s               kubelet  Node image-069000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Oct25 21:20] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.648780] EINJ: EINJ table not found.
	[  +0.540900] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043461] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000940] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.218531] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +0.077618] systemd-fstab-generator[493]: Ignoring "noauto" for root device
	[  +0.456243] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.187462] systemd-fstab-generator[708]: Ignoring "noauto" for root device
	[  +0.080471] systemd-fstab-generator[719]: Ignoring "noauto" for root device
	[  +0.082773] systemd-fstab-generator[732]: Ignoring "noauto" for root device
	[  +1.152793] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.091605] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +0.076630] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.074600] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +0.074509] systemd-fstab-generator[954]: Ignoring "noauto" for root device
	[  +0.102435] systemd-fstab-generator[994]: Ignoring "noauto" for root device
	[  +2.586510] systemd-fstab-generator[1101]: Ignoring "noauto" for root device
	[  +3.681329] systemd-fstab-generator[1487]: Ignoring "noauto" for root device
	[  +0.273461] kauditd_printk_skb: 68 callbacks suppressed
	[  +4.839300] systemd-fstab-generator[2388]: Ignoring "noauto" for root device
	[  +2.956771] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [ac912d5aa97e] <==
	* {"level":"info","ts":"2023-10-25T21:20:29.302771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-10-25T21:20:29.30283Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-10-25T21:20:29.304194Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-25T21:20:29.306102Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-10-25T21:20:29.311709Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-10-25T21:20:29.311953Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-25T21:20:29.311963Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-25T21:20:29.857352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-25T21:20:29.857458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-25T21:20:29.85749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-10-25T21:20:29.857512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-10-25T21:20:29.857552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-10-25T21:20:29.857575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-10-25T21:20:29.857597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-10-25T21:20:29.858388Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:20:29.858737Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-069000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-25T21:20:29.858774Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-25T21:20:29.85931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-25T21:20:29.85943Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-25T21:20:29.859852Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-10-25T21:20:29.860108Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:20:29.860166Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:20:29.860188Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-25T21:20:29.883598Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-25T21:20:29.883679Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  21:20:36 up 0 min,  0 users,  load average: 0.40, 0.10, 0.03
	Linux image-069000 5.10.57 #1 SMP PREEMPT Mon Oct 16 17:34:05 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [34a42f1cbcf5] <==
	* I1025 21:20:30.527868       1 cache.go:39] Caches are synced for autoregister controller
	I1025 21:20:30.527701       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 21:20:30.528967       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 21:20:30.529051       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 21:20:30.538777       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E1025 21:20:30.544159       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","catch-all","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1025 21:20:30.545495       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","exempt","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E1025 21:20:30.547633       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","exempt","catch-all","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1025 21:20:30.551313       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","exempt","catch-all","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1025 21:20:30.556293       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","exempt","catch-all","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1025 21:20:30.577317       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1025 21:20:30.588805       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	I1025 21:20:31.429904       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 21:20:31.431531       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 21:20:31.431541       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 21:20:31.576093       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 21:20:31.586762       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 21:20:31.630726       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 21:20:31.633185       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I1025 21:20:31.633639       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 21:20:31.635584       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 21:20:32.472571       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 21:20:33.210322       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 21:20:33.213775       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 21:20:33.222029       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [99f646aa531f] <==
	* I1025 21:20:32.492931       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I1025 21:20:32.492938       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I1025 21:20:32.495611       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I1025 21:20:32.495663       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I1025 21:20:32.495666       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I1025 21:20:32.519829       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I1025 21:20:32.519856       1 disruption.go:437] "Sending events to api server."
	I1025 21:20:32.519877       1 disruption.go:448] "Starting disruption controller"
	I1025 21:20:32.519880       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I1025 21:20:32.568995       1 shared_informer.go:318] Caches are synced for tokens
	I1025 21:20:32.671400       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I1025 21:20:32.671447       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I1025 21:20:32.671454       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I1025 21:20:32.821160       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1025 21:20:32.821217       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I1025 21:20:32.821224       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I1025 21:20:32.971374       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I1025 21:20:32.971401       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I1025 21:20:32.971406       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I1025 21:20:33.121629       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I1025 21:20:33.121688       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I1025 21:20:33.121695       1 shared_informer.go:311] Waiting for caches to sync for service account
	I1025 21:20:33.271673       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I1025 21:20:33.272284       1 daemon_controller.go:291] "Starting daemon sets controller"
	I1025 21:20:33.272291       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	
	* 
	* ==> kube-scheduler [20cbcc8f21dc] <==
	* W1025 21:20:30.502377       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 21:20:30.502770       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1025 21:20:30.502446       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 21:20:30.502777       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 21:20:30.502480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 21:20:30.502784       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 21:20:30.502502       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 21:20:30.502790       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 21:20:30.502523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 21:20:30.502797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1025 21:20:30.502544       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 21:20:30.502803       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 21:20:30.502572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1025 21:20:30.502809       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1025 21:20:30.502837       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 21:20:30.502846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 21:20:30.502862       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 21:20:30.502870       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1025 21:20:30.502892       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 21:20:30.502896       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 21:20:31.418496       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 21:20:31.418517       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 21:20:31.493920       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 21:20:31.493942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1025 21:20:31.989985       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-25 21:20:16 UTC, ends at Wed 2023-10-25 21:20:36 UTC. --
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.359182    2406 kubelet_node_status.go:70] "Attempting to register node" node="image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.362132    2406 topology_manager.go:215] "Topology Admit Handler" podUID="589e86f4fc1c3f96c484af2faadf9634" podNamespace="kube-system" podName="etcd-image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.362216    2406 topology_manager.go:215] "Topology Admit Handler" podUID="bf11e5caa5ce7df57d5041de8d5ccf9e" podNamespace="kube-system" podName="kube-apiserver-image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.362255    2406 topology_manager.go:215] "Topology Admit Handler" podUID="7b8641a2505d0987ab0c08c77a172d39" podNamespace="kube-system" podName="kube-controller-manager-image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.362272    2406 topology_manager.go:215] "Topology Admit Handler" podUID="17e0b76c0300b55a5b47e484396f7f9a" podNamespace="kube-system" podName="kube-scheduler-image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.363824    2406 kubelet_node_status.go:108] "Node was previously registered" node="image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.363945    2406 kubelet_node_status.go:73] "Successfully registered node" node="image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.557140    2406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b8641a2505d0987ab0c08c77a172d39-flexvolume-dir\") pod \"kube-controller-manager-image-069000\" (UID: \"7b8641a2505d0987ab0c08c77a172d39\") " pod="kube-system/kube-controller-manager-image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.557161    2406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b8641a2505d0987ab0c08c77a172d39-kubeconfig\") pod \"kube-controller-manager-image-069000\" (UID: \"7b8641a2505d0987ab0c08c77a172d39\") " pod="kube-system/kube-controller-manager-image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.557171    2406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf11e5caa5ce7df57d5041de8d5ccf9e-ca-certs\") pod \"kube-apiserver-image-069000\" (UID: \"bf11e5caa5ce7df57d5041de8d5ccf9e\") " pod="kube-system/kube-apiserver-image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.557181    2406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf11e5caa5ce7df57d5041de8d5ccf9e-usr-share-ca-certificates\") pod \"kube-apiserver-image-069000\" (UID: \"bf11e5caa5ce7df57d5041de8d5ccf9e\") " pod="kube-system/kube-apiserver-image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.557191    2406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf11e5caa5ce7df57d5041de8d5ccf9e-k8s-certs\") pod \"kube-apiserver-image-069000\" (UID: \"bf11e5caa5ce7df57d5041de8d5ccf9e\") " pod="kube-system/kube-apiserver-image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.557200    2406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b8641a2505d0987ab0c08c77a172d39-ca-certs\") pod \"kube-controller-manager-image-069000\" (UID: \"7b8641a2505d0987ab0c08c77a172d39\") " pod="kube-system/kube-controller-manager-image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.557342    2406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b8641a2505d0987ab0c08c77a172d39-k8s-certs\") pod \"kube-controller-manager-image-069000\" (UID: \"7b8641a2505d0987ab0c08c77a172d39\") " pod="kube-system/kube-controller-manager-image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.557353    2406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b8641a2505d0987ab0c08c77a172d39-usr-share-ca-certificates\") pod \"kube-controller-manager-image-069000\" (UID: \"7b8641a2505d0987ab0c08c77a172d39\") " pod="kube-system/kube-controller-manager-image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.557361    2406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/17e0b76c0300b55a5b47e484396f7f9a-kubeconfig\") pod \"kube-scheduler-image-069000\" (UID: \"17e0b76c0300b55a5b47e484396f7f9a\") " pod="kube-system/kube-scheduler-image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.557416    2406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/589e86f4fc1c3f96c484af2faadf9634-etcd-certs\") pod \"etcd-image-069000\" (UID: \"589e86f4fc1c3f96c484af2faadf9634\") " pod="kube-system/etcd-image-069000"
	Oct 25 21:20:33 image-069000 kubelet[2406]: I1025 21:20:33.557425    2406 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/589e86f4fc1c3f96c484af2faadf9634-etcd-data\") pod \"etcd-image-069000\" (UID: \"589e86f4fc1c3f96c484af2faadf9634\") " pod="kube-system/etcd-image-069000"
	Oct 25 21:20:34 image-069000 kubelet[2406]: I1025 21:20:34.243660    2406 apiserver.go:52] "Watching apiserver"
	Oct 25 21:20:34 image-069000 kubelet[2406]: I1025 21:20:34.257112    2406 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 25 21:20:34 image-069000 kubelet[2406]: E1025 21:20:34.316633    2406 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-069000\" already exists" pod="kube-system/kube-apiserver-image-069000"
	Oct 25 21:20:34 image-069000 kubelet[2406]: I1025 21:20:34.321104    2406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-069000" podStartSLOduration=1.3210633 podCreationTimestamp="2023-10-25 21:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 21:20:34.317794925 +0000 UTC m=+1.119473502" watchObservedRunningTime="2023-10-25 21:20:34.3210633 +0000 UTC m=+1.122741877"
	Oct 25 21:20:34 image-069000 kubelet[2406]: I1025 21:20:34.324537    2406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-069000" podStartSLOduration=1.324506592 podCreationTimestamp="2023-10-25 21:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 21:20:34.321183508 +0000 UTC m=+1.122862085" watchObservedRunningTime="2023-10-25 21:20:34.324506592 +0000 UTC m=+1.126185168"
	Oct 25 21:20:34 image-069000 kubelet[2406]: I1025 21:20:34.327787    2406 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-069000" podStartSLOduration=1.327747925 podCreationTimestamp="2023-10-25 21:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 21:20:34.324630675 +0000 UTC m=+1.126309210" watchObservedRunningTime="2023-10-25 21:20:34.327747925 +0000 UTC m=+1.129426460"
	Oct 25 21:20:36 image-069000 kubelet[2406]: I1025 21:20:36.831939    2406 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-069000 -n image-069000
helpers_test.go:261: (dbg) Run:  kubectl --context image-069000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-069000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-069000 describe pod storage-provisioner: exit status 1 (37.115834ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context image-069000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.10s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (50.91s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-187000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-187000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.711391375s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-187000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-187000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2731b564-f919-420a-8abc-ce53087de252] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2731b564-f919-420a-8abc-ce53087de252] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.018410541s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-187000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-187000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-187000 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.105.6
E1025 14:22:36.791669    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.030363458s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                
stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-187000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-187000 addons disable ingress-dns --alsologtostderr -v=1: (5.797043333s)
addons_test.go:310: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-187000 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-187000 addons disable ingress --alsologtostderr -v=1: (7.104545416s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-187000 -n ingress-addon-legacy-187000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-187000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| dashboard      | --url --port 36195                       | functional-260000           | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:20 PDT |
	|                | -p functional-260000                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| update-context | functional-260000                        | functional-260000           | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-260000                        | functional-260000           | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-260000                        | functional-260000           | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:19 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-260000                        | functional-260000           | jenkins | v1.31.2 | 25 Oct 23 14:19 PDT | 25 Oct 23 14:20 PDT |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-260000                        | functional-260000           | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-260000 ssh pgrep              | functional-260000           | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-260000 image build -t         | functional-260000           | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                | localhost/my-image:functional-260000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-260000 image ls               | functional-260000           | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	| image          | functional-260000                        | functional-260000           | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-260000                        | functional-260000           | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| delete         | -p functional-260000                     | functional-260000           | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	| start          | -p image-069000 --driver=qemu2           | image-069000                | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-069000                | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-069000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-069000                | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-069000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-069000                | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-069000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-069000                | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-069000                          |                             |         |         |                     |                     |
	| delete         | -p image-069000                          | image-069000                | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:20 PDT |
	| start          | -p ingress-addon-legacy-187000           | ingress-addon-legacy-187000 | jenkins | v1.31.2 | 25 Oct 23 14:20 PDT | 25 Oct 23 14:21 PDT |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-187000              | ingress-addon-legacy-187000 | jenkins | v1.31.2 | 25 Oct 23 14:21 PDT | 25 Oct 23 14:22 PDT |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-187000              | ingress-addon-legacy-187000 | jenkins | v1.31.2 | 25 Oct 23 14:22 PDT | 25 Oct 23 14:22 PDT |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-187000              | ingress-addon-legacy-187000 | jenkins | v1.31.2 | 25 Oct 23 14:22 PDT | 25 Oct 23 14:22 PDT |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-187000 ip           | ingress-addon-legacy-187000 | jenkins | v1.31.2 | 25 Oct 23 14:22 PDT | 25 Oct 23 14:22 PDT |
	| addons         | ingress-addon-legacy-187000              | ingress-addon-legacy-187000 | jenkins | v1.31.2 | 25 Oct 23 14:22 PDT | 25 Oct 23 14:22 PDT |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-187000              | ingress-addon-legacy-187000 | jenkins | v1.31.2 | 25 Oct 23 14:22 PDT | 25 Oct 23 14:22 PDT |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 14:20:37
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 14:20:37.071702    2812 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:20:37.071833    2812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:20:37.071837    2812 out.go:309] Setting ErrFile to fd 2...
	I1025 14:20:37.071840    2812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:20:37.071971    2812 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:20:37.072972    2812 out.go:303] Setting JSON to false
	I1025 14:20:37.089447    2812 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1211,"bootTime":1698267626,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:20:37.089536    2812 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:20:37.093890    2812 out.go:177] * [ingress-addon-legacy-187000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:20:37.105015    2812 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:20:37.100961    2812 notify.go:220] Checking for updates...
	I1025 14:20:37.110886    2812 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:20:37.118884    2812 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:20:37.122883    2812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:20:37.125853    2812 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:20:37.128906    2812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:20:37.132029    2812 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:20:37.135864    2812 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:20:37.142895    2812 start.go:298] selected driver: qemu2
	I1025 14:20:37.142901    2812 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:20:37.142908    2812 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:20:37.145412    2812 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:20:37.147873    2812 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:20:37.151021    2812 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:20:37.151048    2812 cni.go:84] Creating CNI manager for ""
	I1025 14:20:37.151058    2812 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 14:20:37.151070    2812 start_flags.go:323] config:
	{Name:ingress-addon-legacy-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-187000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:20:37.156034    2812 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:20:37.161905    2812 out.go:177] * Starting control plane node ingress-addon-legacy-187000 in cluster ingress-addon-legacy-187000
	I1025 14:20:37.165884    2812 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1025 14:20:37.225330    2812 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1025 14:20:37.225342    2812 cache.go:56] Caching tarball of preloaded images
	I1025 14:20:37.225511    2812 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1025 14:20:37.232873    2812 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1025 14:20:37.240955    2812 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1025 14:20:37.320664    2812 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1025 14:20:43.561009    2812 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1025 14:20:43.561141    2812 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1025 14:20:44.314517    2812 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1025 14:20:44.314708    2812 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/config.json ...
	I1025 14:20:44.314723    2812 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/config.json: {Name:mk61134d52b308ab83f307b7a0de2612728339d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:20:44.314972    2812 start.go:365] acquiring machines lock for ingress-addon-legacy-187000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:20:44.315007    2812 start.go:369] acquired machines lock for "ingress-addon-legacy-187000" in 22.291µs
	I1025 14:20:44.315019    2812 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 K
ubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-187000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:20:44.315056    2812 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:20:44.323057    2812 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1025 14:20:44.338328    2812 start.go:159] libmachine.API.Create for "ingress-addon-legacy-187000" (driver="qemu2")
	I1025 14:20:44.338358    2812 client.go:168] LocalClient.Create starting
	I1025 14:20:44.338433    2812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:20:44.338459    2812 main.go:141] libmachine: Decoding PEM data...
	I1025 14:20:44.338468    2812 main.go:141] libmachine: Parsing certificate...
	I1025 14:20:44.338505    2812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:20:44.338523    2812 main.go:141] libmachine: Decoding PEM data...
	I1025 14:20:44.338533    2812 main.go:141] libmachine: Parsing certificate...
	I1025 14:20:44.338864    2812 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:20:44.468800    2812 main.go:141] libmachine: Creating SSH key...
	I1025 14:20:44.644528    2812 main.go:141] libmachine: Creating Disk image...
	I1025 14:20:44.644536    2812 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:20:44.644725    2812 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/disk.qcow2
	I1025 14:20:44.657230    2812 main.go:141] libmachine: STDOUT: 
	I1025 14:20:44.657246    2812 main.go:141] libmachine: STDERR: 
	I1025 14:20:44.657304    2812 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/disk.qcow2 +20000M
	I1025 14:20:44.667953    2812 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:20:44.667973    2812 main.go:141] libmachine: STDERR: 
	I1025 14:20:44.667993    2812 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/disk.qcow2
	I1025 14:20:44.667998    2812 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:20:44.668031    2812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:0d:dc:cc:35:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/disk.qcow2
	I1025 14:20:44.704058    2812 main.go:141] libmachine: STDOUT: 
	I1025 14:20:44.704095    2812 main.go:141] libmachine: STDERR: 
	I1025 14:20:44.704099    2812 main.go:141] libmachine: Attempt 0
	I1025 14:20:44.704122    2812 main.go:141] libmachine: Searching for f6:d:dc:cc:35:11 in /var/db/dhcpd_leases ...
	I1025 14:20:44.704192    2812 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 14:20:44.704212    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d6:d0:5d:a9:ae:28 ID:1,d6:d0:5d:a9:ae:28 Lease:0x653ad810}
	I1025 14:20:44.704219    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:1e:27:9f:ef:60:cc ID:1,1e:27:9f:ef:60:cc Lease:0x653ad752}
	I1025 14:20:44.704225    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5e:f1:43:8d:26:69 ID:1,5e:f1:43:8d:26:69 Lease:0x653985c1}
	I1025 14:20:44.704231    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x65398536}
	I1025 14:20:46.706389    2812 main.go:141] libmachine: Attempt 1
	I1025 14:20:46.706494    2812 main.go:141] libmachine: Searching for f6:d:dc:cc:35:11 in /var/db/dhcpd_leases ...
	I1025 14:20:46.706839    2812 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 14:20:46.706893    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d6:d0:5d:a9:ae:28 ID:1,d6:d0:5d:a9:ae:28 Lease:0x653ad810}
	I1025 14:20:46.706964    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:1e:27:9f:ef:60:cc ID:1,1e:27:9f:ef:60:cc Lease:0x653ad752}
	I1025 14:20:46.706997    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5e:f1:43:8d:26:69 ID:1,5e:f1:43:8d:26:69 Lease:0x653985c1}
	I1025 14:20:46.707031    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x65398536}
	I1025 14:20:48.709258    2812 main.go:141] libmachine: Attempt 2
	I1025 14:20:48.709350    2812 main.go:141] libmachine: Searching for f6:d:dc:cc:35:11 in /var/db/dhcpd_leases ...
	I1025 14:20:48.709624    2812 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 14:20:48.709673    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d6:d0:5d:a9:ae:28 ID:1,d6:d0:5d:a9:ae:28 Lease:0x653ad810}
	I1025 14:20:48.709705    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:1e:27:9f:ef:60:cc ID:1,1e:27:9f:ef:60:cc Lease:0x653ad752}
	I1025 14:20:48.709736    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5e:f1:43:8d:26:69 ID:1,5e:f1:43:8d:26:69 Lease:0x653985c1}
	I1025 14:20:48.709769    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x65398536}
	I1025 14:20:50.711946    2812 main.go:141] libmachine: Attempt 3
	I1025 14:20:50.711998    2812 main.go:141] libmachine: Searching for f6:d:dc:cc:35:11 in /var/db/dhcpd_leases ...
	I1025 14:20:50.712094    2812 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 14:20:50.712106    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d6:d0:5d:a9:ae:28 ID:1,d6:d0:5d:a9:ae:28 Lease:0x653ad810}
	I1025 14:20:50.712112    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:1e:27:9f:ef:60:cc ID:1,1e:27:9f:ef:60:cc Lease:0x653ad752}
	I1025 14:20:50.712117    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5e:f1:43:8d:26:69 ID:1,5e:f1:43:8d:26:69 Lease:0x653985c1}
	I1025 14:20:50.712122    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x65398536}
	I1025 14:20:52.714192    2812 main.go:141] libmachine: Attempt 4
	I1025 14:20:52.714221    2812 main.go:141] libmachine: Searching for f6:d:dc:cc:35:11 in /var/db/dhcpd_leases ...
	I1025 14:20:52.714267    2812 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 14:20:52.714276    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d6:d0:5d:a9:ae:28 ID:1,d6:d0:5d:a9:ae:28 Lease:0x653ad810}
	I1025 14:20:52.714282    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:1e:27:9f:ef:60:cc ID:1,1e:27:9f:ef:60:cc Lease:0x653ad752}
	I1025 14:20:52.714286    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5e:f1:43:8d:26:69 ID:1,5e:f1:43:8d:26:69 Lease:0x653985c1}
	I1025 14:20:52.714291    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x65398536}
	I1025 14:20:54.716331    2812 main.go:141] libmachine: Attempt 5
	I1025 14:20:54.716343    2812 main.go:141] libmachine: Searching for f6:d:dc:cc:35:11 in /var/db/dhcpd_leases ...
	I1025 14:20:54.716379    2812 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 14:20:54.716391    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d6:d0:5d:a9:ae:28 ID:1,d6:d0:5d:a9:ae:28 Lease:0x653ad810}
	I1025 14:20:54.716398    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:1e:27:9f:ef:60:cc ID:1,1e:27:9f:ef:60:cc Lease:0x653ad752}
	I1025 14:20:54.716404    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5e:f1:43:8d:26:69 ID:1,5e:f1:43:8d:26:69 Lease:0x653985c1}
	I1025 14:20:54.716409    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x65398536}
	I1025 14:20:56.718470    2812 main.go:141] libmachine: Attempt 6
	I1025 14:20:56.718517    2812 main.go:141] libmachine: Searching for f6:d:dc:cc:35:11 in /var/db/dhcpd_leases ...
	I1025 14:20:56.718584    2812 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 14:20:56.718595    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d6:d0:5d:a9:ae:28 ID:1,d6:d0:5d:a9:ae:28 Lease:0x653ad810}
	I1025 14:20:56.718600    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:1e:27:9f:ef:60:cc ID:1,1e:27:9f:ef:60:cc Lease:0x653ad752}
	I1025 14:20:56.718610    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:5e:f1:43:8d:26:69 ID:1,5e:f1:43:8d:26:69 Lease:0x653985c1}
	I1025 14:20:56.718616    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a6:d6:65:c:16:f0 ID:1,a6:d6:65:c:16:f0 Lease:0x65398536}
	I1025 14:20:58.720704    2812 main.go:141] libmachine: Attempt 7
	I1025 14:20:58.720728    2812 main.go:141] libmachine: Searching for f6:d:dc:cc:35:11 in /var/db/dhcpd_leases ...
	I1025 14:20:58.720846    2812 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1025 14:20:58.720858    2812 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:f6:d:dc:cc:35:11 ID:1,f6:d:dc:cc:35:11 Lease:0x653ad839}
	I1025 14:20:58.720862    2812 main.go:141] libmachine: Found match: f6:d:dc:cc:35:11
	I1025 14:20:58.720871    2812 main.go:141] libmachine: IP: 192.168.105.6
	I1025 14:20:58.720877    2812 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I1025 14:21:00.740531    2812 machine.go:88] provisioning docker machine ...
	I1025 14:21:00.740596    2812 buildroot.go:166] provisioning hostname "ingress-addon-legacy-187000"
	I1025 14:21:00.740737    2812 main.go:141] libmachine: Using SSH client type: native
	I1025 14:21:00.741403    2812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536b0c0] 0x10536d830 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1025 14:21:00.741431    2812 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-187000 && echo "ingress-addon-legacy-187000" | sudo tee /etc/hostname
	I1025 14:21:00.822864    2812 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-187000
	
	I1025 14:21:00.822981    2812 main.go:141] libmachine: Using SSH client type: native
	I1025 14:21:00.823419    2812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536b0c0] 0x10536d830 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1025 14:21:00.823436    2812 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-187000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-187000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-187000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 14:21:00.885933    2812 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 14:21:00.885954    2812 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-1304/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-1304/.minikube}
	I1025 14:21:00.885968    2812 buildroot.go:174] setting up certificates
	I1025 14:21:00.885977    2812 provision.go:83] configureAuth start
	I1025 14:21:00.885983    2812 provision.go:138] copyHostCerts
	I1025 14:21:00.886023    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.pem
	I1025 14:21:00.886103    2812 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.pem, removing ...
	I1025 14:21:00.886120    2812 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.pem
	I1025 14:21:00.886379    2812 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.pem (1082 bytes)
	I1025 14:21:00.886632    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cert.pem
	I1025 14:21:00.886661    2812 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-1304/.minikube/cert.pem, removing ...
	I1025 14:21:00.886665    2812 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-1304/.minikube/cert.pem
	I1025 14:21:00.886741    2812 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-1304/.minikube/cert.pem (1123 bytes)
	I1025 14:21:00.886879    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17488-1304/.minikube/key.pem
	I1025 14:21:00.886908    2812 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-1304/.minikube/key.pem, removing ...
	I1025 14:21:00.886912    2812 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-1304/.minikube/key.pem
	I1025 14:21:00.886984    2812 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-1304/.minikube/key.pem (1675 bytes)
	I1025 14:21:00.887115    2812 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-187000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-187000]
	I1025 14:21:01.081987    2812 provision.go:172] copyRemoteCerts
	I1025 14:21:01.082041    2812 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 14:21:01.082053    2812 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/id_rsa Username:docker}
	I1025 14:21:01.111042    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 14:21:01.111090    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 14:21:01.118073    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 14:21:01.118116    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 14:21:01.124787    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 14:21:01.124816    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1025 14:21:01.131898    2812 provision.go:86] duration metric: configureAuth took 245.915917ms
	I1025 14:21:01.131906    2812 buildroot.go:189] setting minikube options for container-runtime
	I1025 14:21:01.131996    2812 config.go:182] Loaded profile config "ingress-addon-legacy-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1025 14:21:01.132032    2812 main.go:141] libmachine: Using SSH client type: native
	I1025 14:21:01.132243    2812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536b0c0] 0x10536d830 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1025 14:21:01.132248    2812 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 14:21:01.181478    2812 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 14:21:01.181486    2812 buildroot.go:70] root file system type: tmpfs
	I1025 14:21:01.181536    2812 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 14:21:01.181581    2812 main.go:141] libmachine: Using SSH client type: native
	I1025 14:21:01.181812    2812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536b0c0] 0x10536d830 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1025 14:21:01.181848    2812 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 14:21:01.239659    2812 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 14:21:01.239700    2812 main.go:141] libmachine: Using SSH client type: native
	I1025 14:21:01.239941    2812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536b0c0] 0x10536d830 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1025 14:21:01.239951    2812 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 14:21:01.585633    2812 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1025 14:21:01.585646    2812 machine.go:91] provisioned docker machine in 845.084292ms
	I1025 14:21:01.585651    2812 client.go:171] LocalClient.Create took 17.247286208s
	I1025 14:21:01.585668    2812 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-187000" took 17.24733525s
	I1025 14:21:01.585676    2812 start.go:300] post-start starting for "ingress-addon-legacy-187000" (driver="qemu2")
	I1025 14:21:01.585681    2812 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 14:21:01.585757    2812 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 14:21:01.585770    2812 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/id_rsa Username:docker}
	I1025 14:21:01.612609    2812 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 14:21:01.614023    2812 info.go:137] Remote host: Buildroot 2021.02.12
	I1025 14:21:01.614030    2812 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-1304/.minikube/addons for local assets ...
	I1025 14:21:01.614107    2812 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-1304/.minikube/files for local assets ...
	I1025 14:21:01.614204    2812 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-1304/.minikube/files/etc/ssl/certs/17232.pem -> 17232.pem in /etc/ssl/certs
	I1025 14:21:01.614209    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/files/etc/ssl/certs/17232.pem -> /etc/ssl/certs/17232.pem
	I1025 14:21:01.614316    2812 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 14:21:01.616837    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/files/etc/ssl/certs/17232.pem --> /etc/ssl/certs/17232.pem (1708 bytes)
	I1025 14:21:01.624047    2812 start.go:303] post-start completed in 38.36575ms
	I1025 14:21:01.624398    2812 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/config.json ...
	I1025 14:21:01.624560    2812 start.go:128] duration metric: createHost completed in 17.3094965s
	I1025 14:21:01.624584    2812 main.go:141] libmachine: Using SSH client type: native
	I1025 14:21:01.624802    2812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10536b0c0] 0x10536d830 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1025 14:21:01.624807    2812 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1025 14:21:01.674349    2812 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698268861.659336669
	
	I1025 14:21:01.674360    2812 fix.go:206] guest clock: 1698268861.659336669
	I1025 14:21:01.674364    2812 fix.go:219] Guest: 2023-10-25 14:21:01.659336669 -0700 PDT Remote: 2023-10-25 14:21:01.624563 -0700 PDT m=+24.575522292 (delta=34.773669ms)
	I1025 14:21:01.674373    2812 fix.go:190] guest clock delta is within tolerance: 34.773669ms
	I1025 14:21:01.674375    2812 start.go:83] releasing machines lock for "ingress-addon-legacy-187000", held for 17.359361125s
	I1025 14:21:01.674626    2812 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 14:21:01.674626    2812 ssh_runner.go:195] Run: cat /version.json
	I1025 14:21:01.674646    2812 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/id_rsa Username:docker}
	I1025 14:21:01.674653    2812 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/id_rsa Username:docker}
	I1025 14:21:01.703751    2812 ssh_runner.go:195] Run: systemctl --version
	I1025 14:21:01.749279    2812 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 14:21:01.751226    2812 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 14:21:01.751257    2812 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1025 14:21:01.754911    2812 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1025 14:21:01.760213    2812 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 14:21:01.760222    2812 start.go:472] detecting cgroup driver to use...
	I1025 14:21:01.760296    2812 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 14:21:01.767701    2812 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1025 14:21:01.770915    2812 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 14:21:01.774301    2812 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 14:21:01.774333    2812 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 14:21:01.777263    2812 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 14:21:01.780222    2812 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 14:21:01.783807    2812 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 14:21:01.787533    2812 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 14:21:01.791158    2812 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 14:21:01.794438    2812 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 14:21:01.797089    2812 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 14:21:01.800211    2812 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 14:21:01.879187    2812 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 14:21:01.885777    2812 start.go:472] detecting cgroup driver to use...
	I1025 14:21:01.885855    2812 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 14:21:01.891316    2812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 14:21:01.896240    2812 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 14:21:01.902112    2812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 14:21:01.906706    2812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 14:21:01.911109    2812 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1025 14:21:01.951671    2812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 14:21:01.956960    2812 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 14:21:01.962433    2812 ssh_runner.go:195] Run: which cri-dockerd
	I1025 14:21:01.963803    2812 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 14:21:01.966878    2812 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 14:21:01.972258    2812 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 14:21:02.059195    2812 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 14:21:02.134807    2812 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 14:21:02.134868    2812 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 14:21:02.139990    2812 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 14:21:02.212788    2812 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 14:21:03.375085    2812 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162279125s)
	I1025 14:21:03.375167    2812 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 14:21:03.385035    2812 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 14:21:03.400643    2812 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	I1025 14:21:03.400761    2812 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1025 14:21:03.402247    2812 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 14:21:03.406069    2812 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1025 14:21:03.406114    2812 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 14:21:03.411636    2812 docker.go:693] Got preloaded images: 
	I1025 14:21:03.411647    2812 docker.go:699] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1025 14:21:03.411681    2812 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 14:21:03.414692    2812 ssh_runner.go:195] Run: which lz4
	I1025 14:21:03.415795    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1025 14:21:03.415875    2812 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1025 14:21:03.417097    2812 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 14:21:03.417107    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I1025 14:21:05.100039    2812 docker.go:657] Took 1.684180 seconds to copy over tarball
	I1025 14:21:05.100100    2812 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 14:21:06.400541    2812 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.30042325s)
	I1025 14:21:06.400554    2812 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 14:21:06.428257    2812 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 14:21:06.432157    2812 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1025 14:21:06.438573    2812 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 14:21:06.522377    2812 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 14:21:08.131524    2812 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.609130458s)
	I1025 14:21:08.131620    2812 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 14:21:08.137700    2812 docker.go:693] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1025 14:21:08.137714    2812 docker.go:699] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1025 14:21:08.137718    2812 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 14:21:08.147605    2812 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1025 14:21:08.147624    2812 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1025 14:21:08.147786    2812 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 14:21:08.147813    2812 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1025 14:21:08.148079    2812 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 14:21:08.148269    2812 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1025 14:21:08.148676    2812 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1025 14:21:08.149242    2812 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1025 14:21:08.159397    2812 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 14:21:08.159403    2812 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1025 14:21:08.159463    2812 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 14:21:08.159485    2812 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1025 14:21:08.159575    2812 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1025 14:21:08.159590    2812 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1025 14:21:08.159618    2812 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1025 14:21:08.160650    2812 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1025 14:21:08.767388    2812 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1025 14:21:08.773911    2812 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1025 14:21:08.773935    2812 docker.go:318] Removing image: registry.k8s.io/pause:3.2
	I1025 14:21:08.773977    2812 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1025 14:21:08.780440    2812 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W1025 14:21:09.069727    2812 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1025 14:21:09.069867    2812 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 14:21:09.080421    2812 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1025 14:21:09.080448    2812 docker.go:318] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 14:21:09.080487    2812 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 14:21:09.086386    2812 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W1025 14:21:09.260005    2812 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 14:21:09.260093    2812 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 14:21:09.266775    2812 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1025 14:21:09.266804    2812 docker.go:318] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 14:21:09.266849    2812 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 14:21:09.277414    2812 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W1025 14:21:09.306209    2812 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1025 14:21:09.306302    2812 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1025 14:21:09.312521    2812 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1025 14:21:09.312545    2812 docker.go:318] Removing image: registry.k8s.io/coredns:1.6.7
	I1025 14:21:09.312622    2812 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1025 14:21:09.319005    2812 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W1025 14:21:09.500753    2812 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1025 14:21:09.500867    2812 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1025 14:21:09.507064    2812 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1025 14:21:09.507089    2812 docker.go:318] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1025 14:21:09.507141    2812 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1025 14:21:09.513311    2812 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W1025 14:21:09.710427    2812 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1025 14:21:09.710574    2812 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1025 14:21:09.717173    2812 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1025 14:21:09.717199    2812 docker.go:318] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1025 14:21:09.717241    2812 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1025 14:21:09.723442    2812 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W1025 14:21:09.942419    2812 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1025 14:21:09.942549    2812 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1025 14:21:09.948859    2812 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1025 14:21:09.948884    2812 docker.go:318] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1025 14:21:09.948935    2812 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1025 14:21:09.962305    2812 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W1025 14:21:10.144140    2812 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1025 14:21:10.144451    2812 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1025 14:21:10.160129    2812 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1025 14:21:10.160169    2812 docker.go:318] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1025 14:21:10.160240    2812 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1025 14:21:10.178943    2812 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1025 14:21:10.179001    2812 cache_images.go:92] LoadImages completed in 2.041271875s
	W1025 14:21:10.179065    2812 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
	I1025 14:21:10.179194    2812 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 14:21:10.193608    2812 cni.go:84] Creating CNI manager for ""
	I1025 14:21:10.193627    2812 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 14:21:10.193640    2812 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 14:21:10.193659    2812 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-187000 NodeName:ingress-addon-legacy-187000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1025 14:21:10.193775    2812 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-187000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 14:21:10.193840    2812 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-187000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-187000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 14:21:10.193909    2812 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1025 14:21:10.198430    2812 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 14:21:10.198485    2812 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 14:21:10.202202    2812 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I1025 14:21:10.208391    2812 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1025 14:21:10.214468    2812 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I1025 14:21:10.220219    2812 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I1025 14:21:10.221655    2812 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 14:21:10.225221    2812 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000 for IP: 192.168.105.6
	I1025 14:21:10.225230    2812 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b24ebfb6727e8dcf7d0828ec4a3e725ccc80b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:21:10.225349    2812 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.key
	I1025 14:21:10.225386    2812 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.key
	I1025 14:21:10.225413    2812 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.key
	I1025 14:21:10.225421    2812 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt with IP's: []
	I1025 14:21:10.334159    2812 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt ...
	I1025 14:21:10.334166    2812 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: {Name:mkaa31f6f13bddcffa71619845adf2362e83782f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:21:10.334423    2812 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.key ...
	I1025 14:21:10.334427    2812 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.key: {Name:mkd5b3b343442c9ed5dcd5eac4710b11ce147760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:21:10.334558    2812 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/apiserver.key.b354f644
	I1025 14:21:10.334565    2812 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 14:21:10.543041    2812 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/apiserver.crt.b354f644 ...
	I1025 14:21:10.543046    2812 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/apiserver.crt.b354f644: {Name:mkce31a65cb1a8ef6a05ccde50836a3ac59d8f03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:21:10.543250    2812 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/apiserver.key.b354f644 ...
	I1025 14:21:10.543256    2812 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/apiserver.key.b354f644: {Name:mk410285336af64b82931b1ef1ec943cb64532f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:21:10.543363    2812 certs.go:337] copying /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/apiserver.crt
	I1025 14:21:10.543630    2812 certs.go:341] copying /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/apiserver.key
	I1025 14:21:10.543739    2812 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/proxy-client.key
	I1025 14:21:10.543747    2812 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/proxy-client.crt with IP's: []
	I1025 14:21:10.773559    2812 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/proxy-client.crt ...
	I1025 14:21:10.773566    2812 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/proxy-client.crt: {Name:mke9c1dda58b50f5b8e6b55701d35602d63dffed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:21:10.773796    2812 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/proxy-client.key ...
	I1025 14:21:10.773801    2812 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/proxy-client.key: {Name:mk2fa63b406ca037568f06aef4f93947ab2309cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:21:10.773926    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 14:21:10.773957    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 14:21:10.773968    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 14:21:10.773978    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 14:21:10.773988    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 14:21:10.773999    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 14:21:10.774009    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 14:21:10.774019    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 14:21:10.774110    2812 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/1723.pem (1338 bytes)
	W1025 14:21:10.774143    2812 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/1723_empty.pem, impossibly tiny 0 bytes
	I1025 14:21:10.774149    2812 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 14:21:10.774171    2812 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem (1082 bytes)
	I1025 14:21:10.774187    2812 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem (1123 bytes)
	I1025 14:21:10.774206    2812 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/certs/key.pem (1675 bytes)
	I1025 14:21:10.774244    2812 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-1304/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-1304/.minikube/files/etc/ssl/certs/17232.pem (1708 bytes)
	I1025 14:21:10.774270    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 14:21:10.774280    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/1723.pem -> /usr/share/ca-certificates/1723.pem
	I1025 14:21:10.774293    2812 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-1304/.minikube/files/etc/ssl/certs/17232.pem -> /usr/share/ca-certificates/17232.pem
	I1025 14:21:10.774638    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 14:21:10.783088    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 14:21:10.790475    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 14:21:10.797360    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 14:21:10.804282    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 14:21:10.811666    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 14:21:10.818793    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 14:21:10.825671    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 14:21:10.832499    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 14:21:10.839784    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/1723.pem --> /usr/share/ca-certificates/1723.pem (1338 bytes)
	I1025 14:21:10.847216    2812 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-1304/.minikube/files/etc/ssl/certs/17232.pem --> /usr/share/ca-certificates/17232.pem (1708 bytes)
	I1025 14:21:10.854291    2812 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 14:21:10.859370    2812 ssh_runner.go:195] Run: openssl version
	I1025 14:21:10.861565    2812 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 14:21:10.864658    2812 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 14:21:10.866204    2812 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I1025 14:21:10.866219    2812 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 14:21:10.868005    2812 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 14:21:10.871395    2812 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1723.pem && ln -fs /usr/share/ca-certificates/1723.pem /etc/ssl/certs/1723.pem"
	I1025 14:21:10.874534    2812 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1723.pem
	I1025 14:21:10.876016    2812 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:16 /usr/share/ca-certificates/1723.pem
	I1025 14:21:10.876041    2812 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1723.pem
	I1025 14:21:10.878149    2812 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1723.pem /etc/ssl/certs/51391683.0"
	I1025 14:21:10.881017    2812 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17232.pem && ln -fs /usr/share/ca-certificates/17232.pem /etc/ssl/certs/17232.pem"
	I1025 14:21:10.884262    2812 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17232.pem
	I1025 14:21:10.885798    2812 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:16 /usr/share/ca-certificates/17232.pem
	I1025 14:21:10.885816    2812 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17232.pem
	I1025 14:21:10.887551    2812 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17232.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 14:21:10.890712    2812 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 14:21:10.892011    2812 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 14:21:10.892039    2812 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.18.20 ClusterName:ingress-addon-legacy-187000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:21:10.892095    2812 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 14:21:10.897688    2812 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 14:21:10.900562    2812 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 14:21:10.903896    2812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 14:21:10.907193    2812 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 14:21:10.907206    2812 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1025 14:21:10.931496    2812 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1025 14:21:10.931665    2812 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 14:21:11.013453    2812 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 14:21:11.013508    2812 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 14:21:11.013573    2812 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 14:21:11.059986    2812 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 14:21:11.060061    2812 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 14:21:11.060082    2812 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 14:21:11.151793    2812 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 14:21:11.156022    2812 out.go:204]   - Generating certificates and keys ...
	I1025 14:21:11.156060    2812 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 14:21:11.156116    2812 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 14:21:11.259564    2812 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 14:21:11.355752    2812 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 14:21:11.478403    2812 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 14:21:11.648410    2812 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 14:21:11.788142    2812 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 14:21:11.788214    2812 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-187000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I1025 14:21:11.882117    2812 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 14:21:11.882181    2812 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-187000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I1025 14:21:11.920901    2812 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 14:21:11.998243    2812 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 14:21:12.086200    2812 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 14:21:12.086229    2812 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 14:21:12.261985    2812 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 14:21:12.351109    2812 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 14:21:12.548771    2812 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 14:21:12.715512    2812 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 14:21:12.715690    2812 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 14:21:12.719013    2812 out.go:204]   - Booting up control plane ...
	I1025 14:21:12.719097    2812 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 14:21:12.722695    2812 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 14:21:12.723175    2812 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 14:21:12.723581    2812 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 14:21:12.724820    2812 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 14:21:24.231006    2812 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.505415 seconds
	I1025 14:21:24.231237    2812 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 14:21:24.252444    2812 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 14:21:24.768179    2812 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 14:21:24.768278    2812 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-187000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1025 14:21:25.274596    2812 kubeadm.go:322] [bootstrap-token] Using token: 244njf.i2u67q05895yvimk
	I1025 14:21:25.278954    2812 out.go:204]   - Configuring RBAC rules ...
	I1025 14:21:25.279057    2812 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 14:21:25.283887    2812 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 14:21:25.289153    2812 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 14:21:25.290719    2812 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 14:21:25.292136    2812 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 14:21:25.293599    2812 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 14:21:25.297926    2812 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 14:21:25.468090    2812 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1025 14:21:25.685205    2812 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1025 14:21:25.686003    2812 kubeadm.go:322] 
	I1025 14:21:25.686050    2812 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1025 14:21:25.686057    2812 kubeadm.go:322] 
	I1025 14:21:25.686108    2812 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1025 14:21:25.686111    2812 kubeadm.go:322] 
	I1025 14:21:25.686160    2812 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1025 14:21:25.686203    2812 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 14:21:25.686242    2812 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 14:21:25.686245    2812 kubeadm.go:322] 
	I1025 14:21:25.686274    2812 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1025 14:21:25.686326    2812 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 14:21:25.686377    2812 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 14:21:25.686381    2812 kubeadm.go:322] 
	I1025 14:21:25.686443    2812 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 14:21:25.686496    2812 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1025 14:21:25.686501    2812 kubeadm.go:322] 
	I1025 14:21:25.686553    2812 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 244njf.i2u67q05895yvimk \
	I1025 14:21:25.686636    2812 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f9ed8a6c1ae5e44374807bc7f35db343f3de11d7a52de7496b63e5c8e8e1eaf6 \
	I1025 14:21:25.686653    2812 kubeadm.go:322]     --control-plane 
	I1025 14:21:25.686656    2812 kubeadm.go:322] 
	I1025 14:21:25.686736    2812 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1025 14:21:25.686745    2812 kubeadm.go:322] 
	I1025 14:21:25.686802    2812 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 244njf.i2u67q05895yvimk \
	I1025 14:21:25.686910    2812 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f9ed8a6c1ae5e44374807bc7f35db343f3de11d7a52de7496b63e5c8e8e1eaf6 
	I1025 14:21:25.687028    2812 kubeadm.go:322] W1025 21:21:10.916352    1415 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1025 14:21:25.687157    2812 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1025 14:21:25.687237    2812 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I1025 14:21:25.687302    2812 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 14:21:25.687411    2812 kubeadm.go:322] W1025 21:21:12.707765    1415 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 14:21:25.687496    2812 kubeadm.go:322] W1025 21:21:12.708334    1415 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 14:21:25.687503    2812 cni.go:84] Creating CNI manager for ""
	I1025 14:21:25.687511    2812 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 14:21:25.687525    2812 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 14:21:25.687602    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:25.687625    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc minikube.k8s.io/name=ingress-addon-legacy-187000 minikube.k8s.io/updated_at=2023_10_25T14_21_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:25.757820    2812 ops.go:34] apiserver oom_adj: -16
	I1025 14:21:25.757868    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:25.792384    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:26.328721    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:26.826946    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:27.327784    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:27.828694    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:28.328554    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:28.828747    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:29.328578    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:29.828596    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:30.328727    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:30.828540    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:31.328681    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:31.828501    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:32.328671    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:32.828731    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:33.328698    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:33.828496    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:34.328723    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:34.828463    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:35.328737    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:35.828701    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:36.328727    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:36.828704    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:37.328719    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:37.828699    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:38.328709    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:38.828569    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:39.328763    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:39.828719    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:40.328720    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:40.828449    2812 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 14:21:40.866840    2812 kubeadm.go:1081] duration metric: took 15.17929825s to wait for elevateKubeSystemPrivileges.
	I1025 14:21:40.866855    2812 kubeadm.go:406] StartCluster complete in 29.9748105s
	I1025 14:21:40.866875    2812 settings.go:142] acquiring lock: {Name:mka8243895d2abf46689bcbcc2c73a1efa650151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:21:40.866971    2812 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:21:40.867358    2812 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/kubeconfig: {Name:mkdc8e211286b196dbaba95cec2e4580798673af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:21:40.867549    2812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 14:21:40.867560    2812 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 14:21:40.867599    2812 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-187000"
	I1025 14:21:40.867606    2812 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-187000"
	I1025 14:21:40.867613    2812 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-187000"
	I1025 14:21:40.867624    2812 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-187000"
	I1025 14:21:40.867632    2812 host.go:66] Checking if "ingress-addon-legacy-187000" exists ...
	I1025 14:21:40.867797    2812 config.go:182] Loaded profile config "ingress-addon-legacy-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1025 14:21:40.867826    2812 kapi.go:59] client config for ingress-addon-legacy-187000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-1304/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint
8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10665a9d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 14:21:40.868241    2812 cert_rotation.go:137] Starting client certificate rotation controller
	I1025 14:21:40.868791    2812 kapi.go:59] client config for ingress-addon-legacy-187000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-1304/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint
8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10665a9d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 14:21:40.868895    2812 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-187000"
	I1025 14:21:40.868905    2812 host.go:66] Checking if "ingress-addon-legacy-187000" exists ...
	I1025 14:21:40.876483    2812 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 14:21:40.869622    2812 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 14:21:40.876495    2812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 14:21:40.880529    2812 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 14:21:40.880535    2812 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 14:21:40.880546    2812 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/id_rsa Username:docker}
	I1025 14:21:40.880549    2812 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/ingress-addon-legacy-187000/id_rsa Username:docker}
	I1025 14:21:40.883389    2812 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-187000" context rescaled to 1 replicas
	I1025 14:21:40.883405    2812 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:21:40.887471    2812 out.go:177] * Verifying Kubernetes components...
	I1025 14:21:40.895505    2812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 14:21:40.926284    2812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 14:21:40.926558    2812 kapi.go:59] client config for ingress-addon-legacy-187000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-1304/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint
8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10665a9d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 14:21:40.926734    2812 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-187000" to be "Ready" ...
	I1025 14:21:40.928106    2812 node_ready.go:49] node "ingress-addon-legacy-187000" has status "Ready":"True"
	I1025 14:21:40.928112    2812 node_ready.go:38] duration metric: took 1.372292ms waiting for node "ingress-addon-legacy-187000" to be "Ready" ...
	I1025 14:21:40.928117    2812 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 14:21:40.931283    2812 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-ndztc" in "kube-system" namespace to be "Ready" ...
	I1025 14:21:40.960203    2812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 14:21:40.968324    2812 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 14:21:41.186574    2812 start.go:926] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I1025 14:21:41.259653    2812 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 14:21:41.263646    2812 addons.go:502] enable addons completed in 396.084416ms: enabled=[storage-provisioner default-storageclass]
	I1025 14:21:42.935406    2812 pod_ready.go:102] pod "coredns-66bff467f8-ndztc" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-25 14:21:40 -0700 PDT Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1025 14:21:44.950241    2812 pod_ready.go:102] pod "coredns-66bff467f8-ndztc" in "kube-system" namespace has status "Ready":"False"
	I1025 14:21:46.448693    2812 pod_ready.go:92] pod "coredns-66bff467f8-ndztc" in "kube-system" namespace has status "Ready":"True"
	I1025 14:21:46.448734    2812 pod_ready.go:81] duration metric: took 5.517438375s waiting for pod "coredns-66bff467f8-ndztc" in "kube-system" namespace to be "Ready" ...
	I1025 14:21:46.448751    2812 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-187000" in "kube-system" namespace to be "Ready" ...
	I1025 14:21:46.455582    2812 pod_ready.go:92] pod "etcd-ingress-addon-legacy-187000" in "kube-system" namespace has status "Ready":"True"
	I1025 14:21:46.455603    2812 pod_ready.go:81] duration metric: took 6.840875ms waiting for pod "etcd-ingress-addon-legacy-187000" in "kube-system" namespace to be "Ready" ...
	I1025 14:21:46.455615    2812 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-187000" in "kube-system" namespace to be "Ready" ...
	I1025 14:21:46.461130    2812 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-187000" in "kube-system" namespace has status "Ready":"True"
	I1025 14:21:46.461153    2812 pod_ready.go:81] duration metric: took 5.528583ms waiting for pod "kube-apiserver-ingress-addon-legacy-187000" in "kube-system" namespace to be "Ready" ...
	I1025 14:21:46.461164    2812 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-187000" in "kube-system" namespace to be "Ready" ...
	I1025 14:21:46.466714    2812 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-187000" in "kube-system" namespace has status "Ready":"True"
	I1025 14:21:46.466732    2812 pod_ready.go:81] duration metric: took 5.558542ms waiting for pod "kube-controller-manager-ingress-addon-legacy-187000" in "kube-system" namespace to be "Ready" ...
	I1025 14:21:46.466749    2812 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zgzl2" in "kube-system" namespace to be "Ready" ...
	I1025 14:21:46.474093    2812 pod_ready.go:92] pod "kube-proxy-zgzl2" in "kube-system" namespace has status "Ready":"True"
	I1025 14:21:46.474107    2812 pod_ready.go:81] duration metric: took 7.348875ms waiting for pod "kube-proxy-zgzl2" in "kube-system" namespace to be "Ready" ...
	I1025 14:21:46.474115    2812 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-187000" in "kube-system" namespace to be "Ready" ...
	I1025 14:21:46.637849    2812 request.go:629] Waited for 163.605417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-187000
	I1025 14:21:46.837835    2812 request.go:629] Waited for 192.084042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-187000
	I1025 14:21:46.845991    2812 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-187000" in "kube-system" namespace has status "Ready":"True"
	I1025 14:21:46.846026    2812 pod_ready.go:81] duration metric: took 371.898584ms waiting for pod "kube-scheduler-ingress-addon-legacy-187000" in "kube-system" namespace to be "Ready" ...
	I1025 14:21:46.846051    2812 pod_ready.go:38] duration metric: took 5.917922917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 14:21:46.846105    2812 api_server.go:52] waiting for apiserver process to appear ...
	I1025 14:21:46.846415    2812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 14:21:46.865378    2812 api_server.go:72] duration metric: took 5.981951417s to wait for apiserver process to appear ...
	I1025 14:21:46.865399    2812 api_server.go:88] waiting for apiserver healthz status ...
	I1025 14:21:46.865416    2812 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I1025 14:21:46.874266    2812 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I1025 14:21:46.875273    2812 api_server.go:141] control plane version: v1.18.20
	I1025 14:21:46.875292    2812 api_server.go:131] duration metric: took 9.885459ms to wait for apiserver health ...
	I1025 14:21:46.875300    2812 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 14:21:47.037877    2812 request.go:629] Waited for 162.423292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I1025 14:21:47.051645    2812 system_pods.go:59] 7 kube-system pods found
	I1025 14:21:47.051700    2812 system_pods.go:61] "coredns-66bff467f8-ndztc" [ab052388-8b95-4a58-bac8-c6ff32744223] Running
	I1025 14:21:47.051711    2812 system_pods.go:61] "etcd-ingress-addon-legacy-187000" [4a850d6f-fbf6-41da-8556-def4edfe4048] Running
	I1025 14:21:47.051723    2812 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-187000" [e58b3d7f-af72-4fcf-be19-4a6af4ae7b80] Running
	I1025 14:21:47.051735    2812 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-187000" [99054276-cb7e-4611-9aac-0b0d012f9973] Running
	I1025 14:21:47.051746    2812 system_pods.go:61] "kube-proxy-zgzl2" [75c1a762-d539-4d2a-8102-91b0cc388597] Running
	I1025 14:21:47.051755    2812 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-187000" [b9b8bc53-b306-4336-b346-aba9d27a793e] Running
	I1025 14:21:47.051767    2812 system_pods.go:61] "storage-provisioner" [b07e203b-b26c-423f-95ad-b55a5cebf3b1] Running
	I1025 14:21:47.051781    2812 system_pods.go:74] duration metric: took 176.470458ms to wait for pod list to return data ...
	I1025 14:21:47.051797    2812 default_sa.go:34] waiting for default service account to be created ...
	I1025 14:21:47.237868    2812 request.go:629] Waited for 185.869292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I1025 14:21:47.245470    2812 default_sa.go:45] found service account: "default"
	I1025 14:21:47.245506    2812 default_sa.go:55] duration metric: took 193.695958ms for default service account to be created ...
	I1025 14:21:47.245523    2812 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 14:21:47.437888    2812 request.go:629] Waited for 192.192916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I1025 14:21:47.450422    2812 system_pods.go:86] 7 kube-system pods found
	I1025 14:21:47.450470    2812 system_pods.go:89] "coredns-66bff467f8-ndztc" [ab052388-8b95-4a58-bac8-c6ff32744223] Running
	I1025 14:21:47.450480    2812 system_pods.go:89] "etcd-ingress-addon-legacy-187000" [4a850d6f-fbf6-41da-8556-def4edfe4048] Running
	I1025 14:21:47.450488    2812 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-187000" [e58b3d7f-af72-4fcf-be19-4a6af4ae7b80] Running
	I1025 14:21:47.450497    2812 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-187000" [99054276-cb7e-4611-9aac-0b0d012f9973] Running
	I1025 14:21:47.450504    2812 system_pods.go:89] "kube-proxy-zgzl2" [75c1a762-d539-4d2a-8102-91b0cc388597] Running
	I1025 14:21:47.450515    2812 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-187000" [b9b8bc53-b306-4336-b346-aba9d27a793e] Running
	I1025 14:21:47.450524    2812 system_pods.go:89] "storage-provisioner" [b07e203b-b26c-423f-95ad-b55a5cebf3b1] Running
	I1025 14:21:47.450536    2812 system_pods.go:126] duration metric: took 205.004416ms to wait for k8s-apps to be running ...
	I1025 14:21:47.450550    2812 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 14:21:47.450740    2812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 14:21:47.466312    2812 system_svc.go:56] duration metric: took 15.75325ms WaitForService to wait for kubelet.
	I1025 14:21:47.466338    2812 kubeadm.go:581] duration metric: took 6.582916083s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 14:21:47.466362    2812 node_conditions.go:102] verifying NodePressure condition ...
	I1025 14:21:47.637831    2812 request.go:629] Waited for 171.342541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I1025 14:21:47.646863    2812 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1025 14:21:47.646911    2812 node_conditions.go:123] node cpu capacity is 2
	I1025 14:21:47.646944    2812 node_conditions.go:105] duration metric: took 180.569458ms to run NodePressure ...
	I1025 14:21:47.646980    2812 start.go:228] waiting for startup goroutines ...
	I1025 14:21:47.647000    2812 start.go:233] waiting for cluster config update ...
	I1025 14:21:47.647045    2812 start.go:242] writing updated cluster config ...
	I1025 14:21:47.648225    2812 ssh_runner.go:195] Run: rm -f paused
	I1025 14:21:47.712510    2812 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I1025 14:21:47.716713    2812 out.go:177] 
	W1025 14:21:47.720738    2812 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I1025 14:21:47.724663    2812 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1025 14:21:47.731615    2812 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-187000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-10-25 21:20:57 UTC, ends at Wed 2023-10-25 21:22:59 UTC. --
	Oct 25 21:22:33 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:33.574797753Z" level=warning msg="cleaning up after shim disconnected" id=ce6c13d14c8f3234b3ba28c2e3bb96bfa70f9ca625a124f3d5dfc4fee1d34a85 namespace=moby
	Oct 25 21:22:33 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:33.574801836Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 25 21:22:47 ingress-addon-legacy-187000 dockerd[1090]: time="2023-10-25T21:22:47.927593741Z" level=info msg="ignoring event" container=617b44bfa6be5609ea13bd89763bb668708132687c233d8bd19ad3280f1f86f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 21:22:47 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:47.928604332Z" level=info msg="shim disconnected" id=617b44bfa6be5609ea13bd89763bb668708132687c233d8bd19ad3280f1f86f0 namespace=moby
	Oct 25 21:22:47 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:47.928663874Z" level=warning msg="cleaning up after shim disconnected" id=617b44bfa6be5609ea13bd89763bb668708132687c233d8bd19ad3280f1f86f0 namespace=moby
	Oct 25 21:22:47 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:47.928674083Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 25 21:22:49 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:49.934194961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 25 21:22:49 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:49.934235919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:22:49 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:49.934248836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 25 21:22:49 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:49.934258419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 21:22:49 ingress-addon-legacy-187000 dockerd[1090]: time="2023-10-25T21:22:49.971974986Z" level=info msg="ignoring event" container=ad0793e348891ec7bbd5c443ba5d5cd3292aa17d82515dc9d139b23e28851be5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 21:22:49 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:49.972189487Z" level=info msg="shim disconnected" id=ad0793e348891ec7bbd5c443ba5d5cd3292aa17d82515dc9d139b23e28851be5 namespace=moby
	Oct 25 21:22:49 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:49.972221279Z" level=warning msg="cleaning up after shim disconnected" id=ad0793e348891ec7bbd5c443ba5d5cd3292aa17d82515dc9d139b23e28851be5 namespace=moby
	Oct 25 21:22:49 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:49.972226196Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 25 21:22:54 ingress-addon-legacy-187000 dockerd[1090]: time="2023-10-25T21:22:54.377456199Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=25afda057bea6d54fc4a0f66aafb307f97e07739ac735231768771f99dd7345c
	Oct 25 21:22:54 ingress-addon-legacy-187000 dockerd[1090]: time="2023-10-25T21:22:54.381742186Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=25afda057bea6d54fc4a0f66aafb307f97e07739ac735231768771f99dd7345c
	Oct 25 21:22:54 ingress-addon-legacy-187000 dockerd[1090]: time="2023-10-25T21:22:54.469089434Z" level=info msg="ignoring event" container=25afda057bea6d54fc4a0f66aafb307f97e07739ac735231768771f99dd7345c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 21:22:54 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:54.469649021Z" level=info msg="shim disconnected" id=25afda057bea6d54fc4a0f66aafb307f97e07739ac735231768771f99dd7345c namespace=moby
	Oct 25 21:22:54 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:54.469719729Z" level=warning msg="cleaning up after shim disconnected" id=25afda057bea6d54fc4a0f66aafb307f97e07739ac735231768771f99dd7345c namespace=moby
	Oct 25 21:22:54 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:54.469731480Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 25 21:22:54 ingress-addon-legacy-187000 dockerd[1090]: time="2023-10-25T21:22:54.514966989Z" level=info msg="ignoring event" container=3a7d8b5c51efe13b359fbccdae27b485e74a01e33fdc93ec34ee616802a93260 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 21:22:54 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:54.515179240Z" level=info msg="shim disconnected" id=3a7d8b5c51efe13b359fbccdae27b485e74a01e33fdc93ec34ee616802a93260 namespace=moby
	Oct 25 21:22:54 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:54.515213282Z" level=warning msg="cleaning up after shim disconnected" id=3a7d8b5c51efe13b359fbccdae27b485e74a01e33fdc93ec34ee616802a93260 namespace=moby
	Oct 25 21:22:54 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:54.515218824Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 25 21:22:54 ingress-addon-legacy-187000 dockerd[1098]: time="2023-10-25T21:22:54.520430983Z" level=warning msg="cleanup warnings time=\"2023-10-25T21:22:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                                      COMMAND                  CREATED              STATUS                          PORTS     NAMES
	ad0793e34889   97e050c3e21e                               "/hello-app"             10 seconds ago       Exited (1) 9 seconds ago                  k8s_hello-world-app_hello-world-app-5f5d8b66bb-zlkf4_default_862ce692-8c71-435e-8213-f6ae59c34f04_2
	79cb6ff23569   k8s.gcr.io/pause:3.2                       "/pause"                 28 seconds ago       Up 27 seconds                             k8s_POD_hello-world-app-5f5d8b66bb-zlkf4_default_862ce692-8c71-435e-8213-f6ae59c34f04_0
	cb7bd170ced4   nginx                                      "/docker-entrypoint.…"   34 seconds ago       Up 34 seconds                             k8s_nginx_nginx_default_2731b564-f919-420a-8abc-ce53087de252_0
	0ec52c8d9c0f   k8s.gcr.io/pause:3.2                       "/pause"                 37 seconds ago       Up 37 seconds                             k8s_POD_nginx_default_2731b564-f919-420a-8abc-ce53087de252_0
	617b44bfa6be   k8s.gcr.io/pause:3.2                       "/pause"                 50 seconds ago       Exited (137) 11 seconds ago               k8s_POD_kube-ingress-dns-minikube_kube-system_09975fa0-b84c-4c0b-afe8-6ad698429e71_0
	25afda057bea   registry.k8s.io/ingress-nginx/controller   "/usr/bin/dumb-init …"   52 seconds ago       Exited (137) 4 seconds ago                k8s_controller_ingress-nginx-controller-7fcf777cb7-w7fdh_ingress-nginx_f4b91409-affd-452c-874e-1d91e05e54fe_0
	3a7d8b5c51ef   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) 4 seconds ago                  k8s_POD_ingress-nginx-controller-7fcf777cb7-w7fdh_ingress-nginx_f4b91409-affd-452c-874e-1d91e05e54fe_0
	99b6b3ca7492   jettech/kube-webhook-certgen               "/kube-webhook-certg…"   About a minute ago   Exited (0) About a minute ago             k8s_patch_ingress-nginx-admission-patch-zrkfk_ingress-nginx_55973098-1a4c-4b83-98a4-79cd69c622bf_0
	e3c8b0d7eced   jettech/kube-webhook-certgen               "/kube-webhook-certg…"   About a minute ago   Exited (0) About a minute ago             k8s_create_ingress-nginx-admission-create-p949h_ingress-nginx_f0cd8c17-60df-42bf-ba35-f18e56a5d674_0
	864c1841b34b   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) About a minute ago             k8s_POD_ingress-nginx-admission-patch-zrkfk_ingress-nginx_55973098-1a4c-4b83-98a4-79cd69c622bf_0
	b3e712b39331   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) About a minute ago             k8s_POD_ingress-nginx-admission-create-p949h_ingress-nginx_f0cd8c17-60df-42bf-ba35-f18e56a5d674_0
	5a4d3f049fc2   6e17ba78cf3e                               "/coredns -conf /etc…"   About a minute ago   Up About a minute                         k8s_coredns_coredns-66bff467f8-ndztc_kube-system_ab052388-8b95-4a58-bac8-c6ff32744223_0
	17830ed38960   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_coredns-66bff467f8-ndztc_kube-system_ab052388-8b95-4a58-bac8-c6ff32744223_0
	b2363a74ad55   gcr.io/k8s-minikube/storage-provisioner    "/storage-provisioner"   About a minute ago   Up About a minute                         k8s_storage-provisioner_storage-provisioner_kube-system_b07e203b-b26c-423f-95ad-b55a5cebf3b1_0
	d94bdb373662   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_storage-provisioner_kube-system_b07e203b-b26c-423f-95ad-b55a5cebf3b1_0
	a7186f39d525   565297bc6f7d                               "/usr/local/bin/kube…"   About a minute ago   Up About a minute                         k8s_kube-proxy_kube-proxy-zgzl2_kube-system_75c1a762-d539-4d2a-8102-91b0cc388597_0
	1158557234dd   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-proxy-zgzl2_kube-system_75c1a762-d539-4d2a-8102-91b0cc388597_0
	441dc0796026   095f37015706                               "kube-scheduler --au…"   About a minute ago   Up About a minute                         k8s_kube-scheduler_kube-scheduler-ingress-addon-legacy-187000_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0
	baa8a7e15c2c   68a4fac29a86                               "kube-controller-man…"   About a minute ago   Up About a minute                         k8s_kube-controller-manager_kube-controller-manager-ingress-addon-legacy-187000_kube-system_b395a1e17534e69e27827b1f8d737725_0
	27b09020eecf   ab707b0a0ea3                               "etcd --advertise-cl…"   About a minute ago   Up About a minute                         k8s_etcd_etcd-ingress-addon-legacy-187000_kube-system_a6a5283b19a23612d0b3bb3b04ef22ef_0
	d9d1966cf36b   2694cf044d66                               "kube-apiserver --ad…"   About a minute ago   Up About a minute                         k8s_kube-apiserver_kube-apiserver-ingress-addon-legacy-187000_kube-system_36bf945afdf7c8fc8d73074b2bf4e3c3_0
	1d3ca0f825fa   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_etcd-ingress-addon-legacy-187000_kube-system_a6a5283b19a23612d0b3bb3b04ef22ef_0
	50e4e1c5d612   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-scheduler-ingress-addon-legacy-187000_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0
	1a8d2b3bfd3c   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-apiserver-ingress-addon-legacy-187000_kube-system_36bf945afdf7c8fc8d73074b2bf4e3c3_0
	8e4b7328ae7b   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-controller-manager-ingress-addon-legacy-187000_kube-system_b395a1e17534e69e27827b1f8d737725_0
	time="2023-10-25T21:22:59Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [5a4d3f049fc2] <==
	* [INFO] 172.17.0.1:42130 - 39784 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034s
	[INFO] 172.17.0.1:43149 - 25817 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000012958s
	[INFO] 172.17.0.1:42130 - 52869 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000024251s
	[INFO] 172.17.0.1:43149 - 44440 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076875s
	[INFO] 172.17.0.1:42130 - 43427 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029542s
	[INFO] 172.17.0.1:43149 - 48080 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000016041s
	[INFO] 172.17.0.1:42130 - 63676 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000023333s
	[INFO] 172.17.0.1:43149 - 45178 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000007084s
	[INFO] 172.17.0.1:42130 - 38191 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000037125s
	[INFO] 172.17.0.1:43149 - 53350 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008042s
	[INFO] 172.17.0.1:43149 - 8541 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000014333s
	[INFO] 172.17.0.1:2077 - 23801 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000031167s
	[INFO] 172.17.0.1:2077 - 27163 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000019709s
	[INFO] 172.17.0.1:2077 - 16411 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000014083s
	[INFO] 172.17.0.1:2077 - 63028 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000014375s
	[INFO] 172.17.0.1:2077 - 24387 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00001275s
	[INFO] 172.17.0.1:2077 - 7073 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012792s
	[INFO] 172.17.0.1:2077 - 24241 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000014083s
	[INFO] 172.17.0.1:34611 - 24445 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000012501s
	[INFO] 172.17.0.1:34611 - 28582 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000014001s
	[INFO] 172.17.0.1:34611 - 21133 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008708s
	[INFO] 172.17.0.1:34611 - 52417 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011792s
	[INFO] 172.17.0.1:34611 - 51399 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012958s
	[INFO] 172.17.0.1:34611 - 16063 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011833s
	[INFO] 172.17.0.1:34611 - 25330 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000012583s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-187000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-187000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=ingress-addon-legacy-187000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T14_21_25_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Oct 2023 21:21:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-187000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Oct 2023 21:22:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Oct 2023 21:22:31 +0000   Wed, 25 Oct 2023 21:21:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Oct 2023 21:22:31 +0000   Wed, 25 Oct 2023 21:21:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Oct 2023 21:22:31 +0000   Wed, 25 Oct 2023 21:21:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Oct 2023 21:22:31 +0000   Wed, 25 Oct 2023 21:21:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-187000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003128Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003128Ki
	  pods:               110
	System Info:
	  Machine ID:                 577085358c6245dbb03466c9a6a23d11
	  System UUID:                577085358c6245dbb03466c9a6a23d11
	  Boot ID:                    cebacb8f-c831-430c-9b6b-15e2d5a7b2ab
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-zlkf4                       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28s
	  default                     nginx                                                  0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         38s
	  kube-system                 coredns-66bff467f8-ndztc                               100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (1%!)(MISSING)        170Mi (4%!)(MISSING)     79s
	  kube-system                 etcd-ingress-addon-legacy-187000                       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         88s
	  kube-system                 kube-apiserver-ingress-addon-legacy-187000             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         88s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-187000    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         88s
	  kube-system                 kube-proxy-zgzl2                                       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         79s
	  kube-system                 kube-scheduler-ingress-addon-legacy-187000             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         88s
	  kube-system                 storage-provisioner                                    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%!)(MISSING)  0 (0%!)(MISSING)
	  memory             70Mi (1%!)(MISSING)   170Mi (4%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-32Mi     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-64Ki     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 88s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s   kubelet     Node ingress-addon-legacy-187000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s   kubelet     Node ingress-addon-legacy-187000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s   kubelet     Node ingress-addon-legacy-187000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                88s   kubelet     Node ingress-addon-legacy-187000 status is now: NodeReady
	  Normal  Starting                 78s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct25 21:20] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.648565] EINJ: EINJ table not found.
	[  +0.524195] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043998] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000860] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Oct25 21:21] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +0.073002] systemd-fstab-generator[493]: Ignoring "noauto" for root device
	[  +0.424096] systemd-fstab-generator[801]: Ignoring "noauto" for root device
	[  +0.180095] systemd-fstab-generator[836]: Ignoring "noauto" for root device
	[  +0.076394] systemd-fstab-generator[847]: Ignoring "noauto" for root device
	[  +0.079068] systemd-fstab-generator[860]: Ignoring "noauto" for root device
	[  +4.308981] systemd-fstab-generator[1066]: Ignoring "noauto" for root device
	[  +1.584414] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.035011] systemd-fstab-generator[1538]: Ignoring "noauto" for root device
	[  +7.922120] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.090936] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +6.232881] systemd-fstab-generator[2624]: Ignoring "noauto" for root device
	[ +15.954208] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.734112] kauditd_printk_skb: 9 callbacks suppressed
	[  +4.423418] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Oct25 21:22] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.439263] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [27b09020eecf] <==
	* raft2023/10/25 21:21:20 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/10/25 21:21:20 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/10/25 21:21:20 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/10/25 21:21:20 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-10-25 21:21:20.348657 W | auth: simple token is not cryptographically signed
	2023-10-25 21:21:20.349910 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-25 21:21:20.350620 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/10/25 21:21:20 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-10-25 21:21:20.350954 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	2023-10-25 21:21:20.351668 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-25 21:21:20.351753 I | embed: listening for peers on 192.168.105.6:2380
	2023-10-25 21:21:20.351858 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/10/25 21:21:20 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/10/25 21:21:20 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/10/25 21:21:20 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/10/25 21:21:20 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/10/25 21:21:20 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-10-25 21:21:20.956399 I | etcdserver: published {Name:ingress-addon-legacy-187000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-10-25 21:21:20.956472 I | embed: ready to serve client requests
	2023-10-25 21:21:20.956509 I | embed: ready to serve client requests
	2023-10-25 21:21:20.957211 I | embed: serving client requests on 192.168.105.6:2379
	2023-10-25 21:21:20.957254 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-25 21:21:20.957587 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-25 21:21:20.959171 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-25 21:21:20.959224 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  21:22:59 up 2 min,  0 users,  load average: 0.81, 0.32, 0.11
	Linux ingress-addon-legacy-187000 5.10.57 #1 SMP PREEMPT Mon Oct 16 17:34:05 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d9d1966cf36b] <==
	* I1025 21:21:22.702718       1 cache.go:39] Caches are synced for autoregister controller
	I1025 21:21:22.702782       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 21:21:22.702822       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1025 21:21:22.702836       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 21:21:22.706253       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1025 21:21:23.602893       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1025 21:21:23.603187       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1025 21:21:23.612655       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1025 21:21:23.618226       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1025 21:21:23.618266       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1025 21:21:23.755065       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 21:21:23.768680       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1025 21:21:23.874836       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I1025 21:21:23.875309       1 controller.go:609] quota admission added evaluator for: endpoints
	I1025 21:21:23.876941       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 21:21:24.920408       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1025 21:21:25.447858       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1025 21:21:25.663562       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1025 21:21:31.796384       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 21:21:40.669339       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1025 21:21:40.749235       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1025 21:21:48.029565       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1025 21:22:21.731527       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1025 21:22:52.375474       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E1025 21:22:52.914517       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [baa8a7e15c2c] <==
	* I1025 21:21:40.870631       1 disruption.go:339] Sending events to api server.
	I1025 21:21:40.891653       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I1025 21:21:40.904949       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"746bedf1-88f3-49b3-ab85-872eab6f27c0", APIVersion:"apps/v1", ResourceVersion:"340", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1025 21:21:40.928614       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"7aab8be3-9c15-407e-a049-a1aa96f3473c", APIVersion:"apps/v1", ResourceVersion:"341", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-n47tx
	I1025 21:21:40.955271       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1025 21:21:41.017704       1 shared_informer.go:230] Caches are synced for endpoint 
	I1025 21:21:41.019820       1 shared_informer.go:230] Caches are synced for HPA 
	I1025 21:21:41.065415       1 shared_informer.go:230] Caches are synced for resource quota 
	I1025 21:21:41.078615       1 shared_informer.go:230] Caches are synced for resource quota 
	I1025 21:21:41.130738       1 shared_informer.go:230] Caches are synced for attach detach 
	I1025 21:21:41.167278       1 shared_informer.go:230] Caches are synced for expand 
	I1025 21:21:41.168487       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1025 21:21:41.173601       1 shared_informer.go:230] Caches are synced for PV protection 
	I1025 21:21:41.224835       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1025 21:21:41.224846       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1025 21:21:41.261363       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1025 21:21:48.025627       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f043c51e-768e-4861-bc5f-e13bc7e81bdc", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1025 21:21:48.031254       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"1702c4ad-4f22-47e5-8ea1-bb76304ba995", APIVersion:"apps/v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-w7fdh
	I1025 21:21:48.034623       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"3cf99325-d6f7-4602-abad-5ff8de8fc902", APIVersion:"batch/v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-p949h
	I1025 21:21:48.063039       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"be1e34bf-8290-4bfb-b730-99e487224a90", APIVersion:"batch/v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-zrkfk
	I1025 21:21:51.044242       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"be1e34bf-8290-4bfb-b730-99e487224a90", APIVersion:"batch/v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1025 21:21:51.050966       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"3cf99325-d6f7-4602-abad-5ff8de8fc902", APIVersion:"batch/v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1025 21:22:31.013485       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"1f0e34e2-c5ed-46dd-8b1f-e7599047e36c", APIVersion:"apps/v1", ResourceVersion:"570", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1025 21:22:31.016042       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"7c373129-1f6c-47ea-99d7-d32633c0d243", APIVersion:"apps/v1", ResourceVersion:"571", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-zlkf4
	E1025 21:22:57.163381       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-25skl" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [a7186f39d525] <==
	* W1025 21:21:41.322289       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1025 21:21:41.326402       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I1025 21:21:41.326421       1 server_others.go:186] Using iptables Proxier.
	I1025 21:21:41.326701       1 server.go:583] Version: v1.18.20
	I1025 21:21:41.328108       1 config.go:315] Starting service config controller
	I1025 21:21:41.328124       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1025 21:21:41.329065       1 config.go:133] Starting endpoints config controller
	I1025 21:21:41.329068       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1025 21:21:41.428229       1 shared_informer.go:230] Caches are synced for service config 
	I1025 21:21:41.429154       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [441dc0796026] <==
	* W1025 21:21:22.618134       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 21:21:22.655995       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1025 21:21:22.656008       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1025 21:21:22.657019       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1025 21:21:22.657383       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 21:21:22.657420       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 21:21:22.657458       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1025 21:21:22.658185       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 21:21:22.658610       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 21:21:22.658661       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 21:21:22.658886       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 21:21:22.658916       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 21:21:22.659046       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 21:21:22.659089       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 21:21:22.659162       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 21:21:22.659210       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 21:21:22.659235       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 21:21:22.659286       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 21:21:22.659361       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 21:21:23.513861       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 21:21:23.516215       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 21:21:23.516382       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 21:21:23.720672       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1025 21:21:23.857543       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1025 21:21:40.682897       1 factory.go:503] pod: kube-system/coredns-66bff467f8-n47tx is already present in the active queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-25 21:20:57 UTC, ends at Wed 2023-10-25 21:22:59 UTC. --
	Oct 25 21:22:35 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:35.581355    2630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ce6c13d14c8f3234b3ba28c2e3bb96bfa70f9ca625a124f3d5dfc4fee1d34a85
	Oct 25 21:22:35 ingress-addon-legacy-187000 kubelet[2630]: E1025 21:22:35.581846    2630 pod_workers.go:191] Error syncing pod 862ce692-8c71-435e-8213-f6ae59c34f04 ("hello-world-app-5f5d8b66bb-zlkf4_default(862ce692-8c71-435e-8213-f6ae59c34f04)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-zlkf4_default(862ce692-8c71-435e-8213-f6ae59c34f04)"
	Oct 25 21:22:45 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:45.883302    2630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f2faaa8e2bf2764978c0aebf5822ff5ca24e4520c502321a0feac08463f767ff
	Oct 25 21:22:45 ingress-addon-legacy-187000 kubelet[2630]: E1025 21:22:45.894708    2630 pod_workers.go:191] Error syncing pod 09975fa0-b84c-4c0b-afe8-6ad698429e71 ("kube-ingress-dns-minikube_kube-system(09975fa0-b84c-4c0b-afe8-6ad698429e71)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(09975fa0-b84c-4c0b-afe8-6ad698429e71)"
	Oct 25 21:22:46 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:46.495468    2630 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-qxwb7" (UniqueName: "kubernetes.io/secret/09975fa0-b84c-4c0b-afe8-6ad698429e71-minikube-ingress-dns-token-qxwb7") pod "09975fa0-b84c-4c0b-afe8-6ad698429e71" (UID: "09975fa0-b84c-4c0b-afe8-6ad698429e71")
	Oct 25 21:22:46 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:46.500799    2630 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09975fa0-b84c-4c0b-afe8-6ad698429e71-minikube-ingress-dns-token-qxwb7" (OuterVolumeSpecName: "minikube-ingress-dns-token-qxwb7") pod "09975fa0-b84c-4c0b-afe8-6ad698429e71" (UID: "09975fa0-b84c-4c0b-afe8-6ad698429e71"). InnerVolumeSpecName "minikube-ingress-dns-token-qxwb7". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 25 21:22:46 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:46.595866    2630 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-qxwb7" (UniqueName: "kubernetes.io/secret/09975fa0-b84c-4c0b-afe8-6ad698429e71-minikube-ingress-dns-token-qxwb7") on node "ingress-addon-legacy-187000" DevicePath ""
	Oct 25 21:22:48 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:48.806094    2630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f2faaa8e2bf2764978c0aebf5822ff5ca24e4520c502321a0feac08463f767ff
	Oct 25 21:22:49 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:49.880922    2630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ce6c13d14c8f3234b3ba28c2e3bb96bfa70f9ca625a124f3d5dfc4fee1d34a85
	Oct 25 21:22:50 ingress-addon-legacy-187000 kubelet[2630]: W1025 21:22:49.984697    2630 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod862ce692-8c71-435e-8213-f6ae59c34f04/ad0793e348891ec7bbd5c443ba5d5cd3292aa17d82515dc9d139b23e28851be5": none of the resources are being tracked.
	Oct 25 21:22:50 ingress-addon-legacy-187000 kubelet[2630]: W1025 21:22:50.854429    2630 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-zlkf4 through plugin: invalid network status for
	Oct 25 21:22:50 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:50.862772    2630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ce6c13d14c8f3234b3ba28c2e3bb96bfa70f9ca625a124f3d5dfc4fee1d34a85
	Oct 25 21:22:50 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:50.863485    2630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ad0793e348891ec7bbd5c443ba5d5cd3292aa17d82515dc9d139b23e28851be5
	Oct 25 21:22:50 ingress-addon-legacy-187000 kubelet[2630]: E1025 21:22:50.863978    2630 pod_workers.go:191] Error syncing pod 862ce692-8c71-435e-8213-f6ae59c34f04 ("hello-world-app-5f5d8b66bb-zlkf4_default(862ce692-8c71-435e-8213-f6ae59c34f04)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-zlkf4_default(862ce692-8c71-435e-8213-f6ae59c34f04)"
	Oct 25 21:22:51 ingress-addon-legacy-187000 kubelet[2630]: W1025 21:22:51.879534    2630 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-zlkf4 through plugin: invalid network status for
	Oct 25 21:22:52 ingress-addon-legacy-187000 kubelet[2630]: E1025 21:22:52.368964    2630 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-w7fdh.179176a1eef3ed4d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-w7fdh", UID:"f4b91409-affd-452c-874e-1d91e05e54fe", APIVersion:"v1", ResourceVersion:"439", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-187000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1467fab15d3354d, ext:86948353765, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1467fab15d3354d, ext:86948353765, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-w7fdh.179176a1eef3ed4d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 25 21:22:52 ingress-addon-legacy-187000 kubelet[2630]: E1025 21:22:52.376989    2630 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-w7fdh.179176a1eef3ed4d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-w7fdh", UID:"f4b91409-affd-452c-874e-1d91e05e54fe", APIVersion:"v1", ResourceVersion:"439", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-187000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1467fab15d3354d, ext:86948353765, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1467fab164bd689, ext:86956259361, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-w7fdh.179176a1eef3ed4d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 25 21:22:54 ingress-addon-legacy-187000 kubelet[2630]: W1025 21:22:54.959693    2630 pod_container_deletor.go:77] Container "3a7d8b5c51efe13b359fbccdae27b485e74a01e33fdc93ec34ee616802a93260" not found in pod's containers
	Oct 25 21:22:56 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:56.623170    2630 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-jxf8c" (UniqueName: "kubernetes.io/secret/f4b91409-affd-452c-874e-1d91e05e54fe-ingress-nginx-token-jxf8c") pod "f4b91409-affd-452c-874e-1d91e05e54fe" (UID: "f4b91409-affd-452c-874e-1d91e05e54fe")
	Oct 25 21:22:56 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:56.628383    2630 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f4b91409-affd-452c-874e-1d91e05e54fe-webhook-cert") pod "f4b91409-affd-452c-874e-1d91e05e54fe" (UID: "f4b91409-affd-452c-874e-1d91e05e54fe")
	Oct 25 21:22:56 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:56.632644    2630 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b91409-affd-452c-874e-1d91e05e54fe-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f4b91409-affd-452c-874e-1d91e05e54fe" (UID: "f4b91409-affd-452c-874e-1d91e05e54fe"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 25 21:22:56 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:56.633516    2630 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b91409-affd-452c-874e-1d91e05e54fe-ingress-nginx-token-jxf8c" (OuterVolumeSpecName: "ingress-nginx-token-jxf8c") pod "f4b91409-affd-452c-874e-1d91e05e54fe" (UID: "f4b91409-affd-452c-874e-1d91e05e54fe"). InnerVolumeSpecName "ingress-nginx-token-jxf8c". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 25 21:22:56 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:56.729816    2630 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f4b91409-affd-452c-874e-1d91e05e54fe-webhook-cert") on node "ingress-addon-legacy-187000" DevicePath ""
	Oct 25 21:22:56 ingress-addon-legacy-187000 kubelet[2630]: I1025 21:22:56.729942    2630 reconciler.go:319] Volume detached for volume "ingress-nginx-token-jxf8c" (UniqueName: "kubernetes.io/secret/f4b91409-affd-452c-874e-1d91e05e54fe-ingress-nginx-token-jxf8c") on node "ingress-addon-legacy-187000" DevicePath ""
	Oct 25 21:22:57 ingress-addon-legacy-187000 kubelet[2630]: W1025 21:22:57.905179    2630 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/f4b91409-affd-452c-874e-1d91e05e54fe/volumes" does not exist
	
	* 
	* ==> storage-provisioner [b2363a74ad55] <==
	* I1025 21:21:42.959971       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 21:21:42.964506       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 21:21:42.964554       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 21:21:42.969676       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 21:21:42.969828       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-187000_d6a84807-1a44-4156-903b-2d7c8b96df7a!
	I1025 21:21:42.970199       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1c757e70-55ee-4767-a3e6-bd8470076b84", APIVersion:"v1", ResourceVersion:"380", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-187000_d6a84807-1a44-4156-903b-2d7c8b96df7a became leader
	I1025 21:21:43.070720       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-187000_d6a84807-1a44-4156-903b-2d7c8b96df7a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-187000 -n ingress-addon-legacy-187000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-187000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (50.91s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (10.02s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-285000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-285000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.947821s)

                                                
                                                
-- stdout --
	* [mount-start-1-285000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-285000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-285000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-285000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-285000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-285000 -n mount-start-1-285000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-285000 -n mount-start-1-285000: exit status 7 (71.266041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-285000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.02s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (10.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-418000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-418000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.016328417s)

                                                
                                                
-- stdout --
	* [multinode-418000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-418000 in cluster multinode-418000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-418000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:25:15.038932    3199 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:25:15.039092    3199 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:25:15.039095    3199 out.go:309] Setting ErrFile to fd 2...
	I1025 14:25:15.039097    3199 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:25:15.039212    3199 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:25:15.040266    3199 out.go:303] Setting JSON to false
	I1025 14:25:15.056573    3199 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1489,"bootTime":1698267626,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:25:15.056656    3199 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:25:15.062436    3199 out.go:177] * [multinode-418000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:25:15.069429    3199 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:25:15.073439    3199 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:25:15.069499    3199 notify.go:220] Checking for updates...
	I1025 14:25:15.079380    3199 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:25:15.085316    3199 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:25:15.092370    3199 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:25:15.095392    3199 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:25:15.098538    3199 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:25:15.102386    3199 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:25:15.109364    3199 start.go:298] selected driver: qemu2
	I1025 14:25:15.109370    3199 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:25:15.109376    3199 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:25:15.111852    3199 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:25:15.116423    3199 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:25:15.119507    3199 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:25:15.119529    3199 cni.go:84] Creating CNI manager for ""
	I1025 14:25:15.119532    3199 cni.go:136] 0 nodes found, recommending kindnet
	I1025 14:25:15.119539    3199 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 14:25:15.119545    3199 start_flags.go:323] config:
	{Name:multinode-418000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-418000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAge
ntPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:25:15.124671    3199 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:25:15.131446    3199 out.go:177] * Starting control plane node multinode-418000 in cluster multinode-418000
	I1025 14:25:15.135434    3199 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:25:15.135450    3199 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:25:15.135461    3199 cache.go:56] Caching tarball of preloaded images
	I1025 14:25:15.135512    3199 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:25:15.135518    3199 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:25:15.135740    3199 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/multinode-418000/config.json ...
	I1025 14:25:15.135751    3199 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/multinode-418000/config.json: {Name:mk8cb4fd597ed97b299233fdbd60cda00cdbb5ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:25:15.135953    3199 start.go:365] acquiring machines lock for multinode-418000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:25:15.135981    3199 start.go:369] acquired machines lock for "multinode-418000" in 22.75µs
	I1025 14:25:15.135991    3199 start.go:93] Provisioning new machine with config: &{Name:multinode-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-418000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:25:15.136016    3199 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:25:15.140407    3199 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:25:15.157295    3199 start.go:159] libmachine.API.Create for "multinode-418000" (driver="qemu2")
	I1025 14:25:15.157321    3199 client.go:168] LocalClient.Create starting
	I1025 14:25:15.157376    3199 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:25:15.157407    3199 main.go:141] libmachine: Decoding PEM data...
	I1025 14:25:15.157418    3199 main.go:141] libmachine: Parsing certificate...
	I1025 14:25:15.157454    3199 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:25:15.157473    3199 main.go:141] libmachine: Decoding PEM data...
	I1025 14:25:15.157481    3199 main.go:141] libmachine: Parsing certificate...
	I1025 14:25:15.157820    3199 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:25:15.275118    3199 main.go:141] libmachine: Creating SSH key...
	I1025 14:25:15.526149    3199 main.go:141] libmachine: Creating Disk image...
	I1025 14:25:15.526157    3199 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:25:15.526328    3199 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2
	I1025 14:25:15.538924    3199 main.go:141] libmachine: STDOUT: 
	I1025 14:25:15.538939    3199 main.go:141] libmachine: STDERR: 
	I1025 14:25:15.539002    3199 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2 +20000M
	I1025 14:25:15.549632    3199 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:25:15.549655    3199 main.go:141] libmachine: STDERR: 
	I1025 14:25:15.549671    3199 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2
	I1025 14:25:15.549678    3199 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:25:15.549720    3199 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:6d:47:48:cb:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2
	I1025 14:25:15.551480    3199 main.go:141] libmachine: STDOUT: 
	I1025 14:25:15.551495    3199 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:25:15.551516    3199 client.go:171] LocalClient.Create took 394.189833ms
	I1025 14:25:17.553696    3199 start.go:128] duration metric: createHost completed in 2.417655667s
	I1025 14:25:17.553758    3199 start.go:83] releasing machines lock for "multinode-418000", held for 2.4177675s
	W1025 14:25:17.553795    3199 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:25:17.570371    3199 out.go:177] * Deleting "multinode-418000" in qemu2 ...
	W1025 14:25:17.594028    3199 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:25:17.594063    3199 start.go:706] Will try again in 5 seconds ...
	I1025 14:25:22.596367    3199 start.go:365] acquiring machines lock for multinode-418000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:25:22.596824    3199 start.go:369] acquired machines lock for "multinode-418000" in 358.875µs
	I1025 14:25:22.596954    3199 start.go:93] Provisioning new machine with config: &{Name:multinode-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-418000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:25:22.597156    3199 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:25:22.607898    3199 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:25:22.657052    3199 start.go:159] libmachine.API.Create for "multinode-418000" (driver="qemu2")
	I1025 14:25:22.657097    3199 client.go:168] LocalClient.Create starting
	I1025 14:25:22.657222    3199 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:25:22.657273    3199 main.go:141] libmachine: Decoding PEM data...
	I1025 14:25:22.657297    3199 main.go:141] libmachine: Parsing certificate...
	I1025 14:25:22.657363    3199 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:25:22.657407    3199 main.go:141] libmachine: Decoding PEM data...
	I1025 14:25:22.657421    3199 main.go:141] libmachine: Parsing certificate...
	I1025 14:25:22.657891    3199 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:25:22.785540    3199 main.go:141] libmachine: Creating SSH key...
	I1025 14:25:22.957263    3199 main.go:141] libmachine: Creating Disk image...
	I1025 14:25:22.957270    3199 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:25:22.957448    3199 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2
	I1025 14:25:22.970129    3199 main.go:141] libmachine: STDOUT: 
	I1025 14:25:22.970146    3199 main.go:141] libmachine: STDERR: 
	I1025 14:25:22.970216    3199 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2 +20000M
	I1025 14:25:22.981018    3199 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:25:22.981031    3199 main.go:141] libmachine: STDERR: 
	I1025 14:25:22.981046    3199 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2
	I1025 14:25:22.981051    3199 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:25:22.981098    3199 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:c5:33:1f:1b:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2
	I1025 14:25:22.982823    3199 main.go:141] libmachine: STDOUT: 
	I1025 14:25:22.982836    3199 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:25:22.982847    3199 client.go:171] LocalClient.Create took 325.741792ms
	I1025 14:25:24.985021    3199 start.go:128] duration metric: createHost completed in 2.387809459s
	I1025 14:25:24.985085    3199 start.go:83] releasing machines lock for "multinode-418000", held for 2.388235s
	W1025 14:25:24.985482    3199 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:25:24.993150    3199 out.go:177] 
	W1025 14:25:24.998180    3199 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:25:24.998223    3199 out.go:239] * 
	* 
	W1025 14:25:25.000954    3199 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:25:25.010054    3199 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-418000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000: exit status 7 (68.532875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.09s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (91.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (121.4325ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-418000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- rollout status deployment/busybox: exit status 1 (59.087792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.711708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.47275ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.373667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E1025 14:25:28.733697    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.824458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.624167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.4495ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.901541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.380583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.830417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E1025 14:26:50.656317    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.473041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.712958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.129583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.657542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.558459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000: exit status 7 (32.419917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (91.72s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-418000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.570375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000: exit status 7 (32.425209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-418000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-418000 -v 3 --alsologtostderr: exit status 89 (42.478417ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-418000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:26:56.938829    3304 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:26:56.939033    3304 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:26:56.939036    3304 out.go:309] Setting ErrFile to fd 2...
	I1025 14:26:56.939038    3304 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:26:56.939178    3304 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:26:56.939407    3304 mustload.go:65] Loading cluster: multinode-418000
	I1025 14:26:56.939602    3304 config.go:182] Loaded profile config "multinode-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:26:56.944540    3304 out.go:177] * The control plane node must be running for this command
	I1025 14:26:56.947568    3304 out.go:177]   To start a cluster, run: "minikube start -p multinode-418000"

                                                
                                                
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-418000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000: exit status 7 (32.399375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-418000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-418000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-418000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidd
en\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.3\",\"ClusterName\":\"multinode-418000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\
",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPat
h\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000,\"GPUs\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000: exit status 7 (32.228333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-418000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-418000 status --output json --alsologtostderr: exit status 7 (31.7645ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-418000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:26:57.123031    3314 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:26:57.123221    3314 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:26:57.123224    3314 out.go:309] Setting ErrFile to fd 2...
	I1025 14:26:57.123226    3314 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:26:57.123353    3314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:26:57.123484    3314 out.go:303] Setting JSON to true
	I1025 14:26:57.123496    3314 mustload.go:65] Loading cluster: multinode-418000
	I1025 14:26:57.123565    3314 notify.go:220] Checking for updates...
	I1025 14:26:57.123690    3314 config.go:182] Loaded profile config "multinode-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:26:57.123695    3314 status.go:255] checking status of multinode-418000 ...
	I1025 14:26:57.123888    3314 status.go:330] multinode-418000 host status = "Stopped" (err=<nil>)
	I1025 14:26:57.123892    3314 status.go:343] host is not running, skipping remaining checks
	I1025 14:26:57.123895    3314 status.go:257] multinode-418000 status: &{Name:multinode-418000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-418000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000: exit status 7 (32.47725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
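
The CopyFile failure is a decode-shape mismatch rather than a copy error: with the host stopped, `status --output json` emits the single object shown in the stdout block, while the test unmarshals into `[]cmd.Status` and rejects it ("cannot unmarshal object into Go value of type []cmd.Status"). A minimal sketch that tolerates both shapes, assuming a local Status struct mirroring only the fields printed above (it is not minikube's cmd.Status):

	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	// decodeStatuses accepts either a bare object (single node, as in the log
	// above) or an array (multi-node) and always returns a slice.
	func decodeStatuses(out []byte) ([]Status, error) {
		trimmed := bytes.TrimSpace(out)
		if len(trimmed) > 0 && trimmed[0] == '{' {
			var s Status
			if err := json.Unmarshal(trimmed, &s); err != nil {
				return nil, err
			}
			return []Status{s}, nil
		}
		var ss []Status
		return ss, json.Unmarshal(trimmed, &ss)
	}

	func main() {
		out := []byte(`{"Name":"multinode-418000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
		ss, err := decodeStatuses(out)
		fmt.Println(ss, err)
	}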

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-418000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-418000 node stop m03: exit status 85 (49.45725ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-418000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-418000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-418000 status: exit status 7 (32.357167ms)

-- stdout --
	multinode-418000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-418000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-418000 status --alsologtostderr: exit status 7 (31.837959ms)

-- stdout --
	multinode-418000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 14:26:57.269862    3322 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:26:57.270041    3322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:26:57.270044    3322 out.go:309] Setting ErrFile to fd 2...
	I1025 14:26:57.270047    3322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:26:57.270194    3322 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:26:57.270316    3322 out.go:303] Setting JSON to false
	I1025 14:26:57.270327    3322 mustload.go:65] Loading cluster: multinode-418000
	I1025 14:26:57.270398    3322 notify.go:220] Checking for updates...
	I1025 14:26:57.270550    3322 config.go:182] Loaded profile config "multinode-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:26:57.270555    3322 status.go:255] checking status of multinode-418000 ...
	I1025 14:26:57.270770    3322 status.go:330] multinode-418000 host status = "Stopped" (err=<nil>)
	I1025 14:26:57.270774    3322 status.go:343] host is not running, skipping remaining checks
	I1025 14:26:57.270776    3322 status.go:257] multinode-418000 status: &{Name:multinode-418000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-418000 status --alsologtostderr": multinode-418000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000: exit status 7 (32.272042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)
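
Exit status 85 (GUEST_NODE_RETRIEVE) here is a knock-on effect: node m03 was never created because FreshStart2Nodes failed earlier, so every m03 operation in this group fails the same way. A minimal guard sketch, assuming `node list -p <profile>` prints one "name<tab>ip" line per node (an assumption about the output format, not verified from this log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasNode reports whether `minikube node list` output mentions the node name,
	// matching on the first field of each line.
	func hasNode(minikubeBin, profile, node string) (bool, error) {
		out, err := exec.Command(minikubeBin, "node", "list", "-p", profile).Output()
		if err != nil {
			return false, err
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), node) {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasNode("out/minikube-darwin-arm64", "multinode-418000", "m03")
		fmt.Println(ok, err)
	}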

TestMultiNode/serial/StartAfterStop (0.11s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-418000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-418000 node start m03 --alsologtostderr: exit status 85 (47.126625ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 14:26:57.335019    3326 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:26:57.335256    3326 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:26:57.335259    3326 out.go:309] Setting ErrFile to fd 2...
	I1025 14:26:57.335262    3326 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:26:57.335396    3326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:26:57.335622    3326 mustload.go:65] Loading cluster: multinode-418000
	I1025 14:26:57.335826    3326 config.go:182] Loaded profile config "multinode-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:26:57.340488    3326 out.go:177] 
	W1025 14:26:57.341662    3326 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1025 14:26:57.341666    3326 out.go:239] * 
	* 
	W1025 14:26:57.343110    3326 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:26:57.346421    3326 out.go:177] 

** /stderr **
multinode_test.go:256: I1025 14:26:57.335019    3326 out.go:296] Setting OutFile to fd 1 ...
I1025 14:26:57.335256    3326 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 14:26:57.335259    3326 out.go:309] Setting ErrFile to fd 2...
I1025 14:26:57.335262    3326 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 14:26:57.335396    3326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
I1025 14:26:57.335622    3326 mustload.go:65] Loading cluster: multinode-418000
I1025 14:26:57.335826    3326 config.go:182] Loaded profile config "multinode-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 14:26:57.340488    3326 out.go:177] 
W1025 14:26:57.341662    3326 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1025 14:26:57.341666    3326 out.go:239] * 
* 
W1025 14:26:57.343110    3326 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1025 14:26:57.346421    3326 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-418000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-418000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-418000 status: exit status 7 (32.405708ms)

-- stdout --
	multinode-418000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-418000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000: exit status 7 (32.078333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.11s)

TestMultiNode/serial/RestartKeepsNodes (5.38s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-418000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-418000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-418000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-418000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.185750333s)

-- stdout --
	* [multinode-418000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-418000 in cluster multinode-418000
	* Restarting existing qemu2 VM for "multinode-418000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-418000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 14:26:57.543594    3336 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:26:57.543740    3336 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:26:57.543743    3336 out.go:309] Setting ErrFile to fd 2...
	I1025 14:26:57.543746    3336 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:26:57.543868    3336 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:26:57.544939    3336 out.go:303] Setting JSON to false
	I1025 14:26:57.560849    3336 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1591,"bootTime":1698267626,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:26:57.560934    3336 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:26:57.565481    3336 out.go:177] * [multinode-418000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:26:57.572495    3336 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:26:57.576381    3336 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:26:57.572609    3336 notify.go:220] Checking for updates...
	I1025 14:26:57.580442    3336 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:26:57.583443    3336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:26:57.586437    3336 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:26:57.589451    3336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:26:57.592755    3336 config.go:182] Loaded profile config "multinode-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:26:57.592803    3336 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:26:57.597399    3336 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 14:26:57.604480    3336 start.go:298] selected driver: qemu2
	I1025 14:26:57.604486    3336 start.go:902] validating driver "qemu2" against &{Name:multinode-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.3 ClusterName:multinode-418000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:26:57.604537    3336 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:26:57.606684    3336 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:26:57.606740    3336 cni.go:84] Creating CNI manager for ""
	I1025 14:26:57.606745    3336 cni.go:136] 1 nodes found, recommending kindnet
	I1025 14:26:57.606750    3336 start_flags.go:323] config:
	{Name:multinode-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-418000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:26:57.610790    3336 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:26:57.618382    3336 out.go:177] * Starting control plane node multinode-418000 in cluster multinode-418000
	I1025 14:26:57.628938    3336 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:26:57.628952    3336 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:26:57.628958    3336 cache.go:56] Caching tarball of preloaded images
	I1025 14:26:57.629004    3336 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:26:57.629009    3336 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:26:57.629067    3336 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/multinode-418000/config.json ...
	I1025 14:26:57.629406    3336 start.go:365] acquiring machines lock for multinode-418000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:26:57.629435    3336 start.go:369] acquired machines lock for "multinode-418000" in 23.209µs
	I1025 14:26:57.629442    3336 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:26:57.629447    3336 fix.go:54] fixHost starting: 
	I1025 14:26:57.629550    3336 fix.go:102] recreateIfNeeded on multinode-418000: state=Stopped err=<nil>
	W1025 14:26:57.629560    3336 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:26:57.633477    3336 out.go:177] * Restarting existing qemu2 VM for "multinode-418000" ...
	I1025 14:26:57.640452    3336 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:c5:33:1f:1b:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2
	I1025 14:26:57.642430    3336 main.go:141] libmachine: STDOUT: 
	I1025 14:26:57.642445    3336 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:26:57.642470    3336 fix.go:56] fixHost completed within 13.023542ms
	I1025 14:26:57.642475    3336 start.go:83] releasing machines lock for "multinode-418000", held for 13.036333ms
	W1025 14:26:57.642480    3336 start.go:691] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:26:57.642520    3336 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:26:57.642525    3336 start.go:706] Will try again in 5 seconds ...
	I1025 14:27:02.644818    3336 start.go:365] acquiring machines lock for multinode-418000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:27:02.645205    3336 start.go:369] acquired machines lock for "multinode-418000" in 293.041µs
	I1025 14:27:02.645349    3336 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:27:02.645374    3336 fix.go:54] fixHost starting: 
	I1025 14:27:02.646107    3336 fix.go:102] recreateIfNeeded on multinode-418000: state=Stopped err=<nil>
	W1025 14:27:02.646129    3336 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:27:02.651482    3336 out.go:177] * Restarting existing qemu2 VM for "multinode-418000" ...
	I1025 14:27:02.658723    3336 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:c5:33:1f:1b:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2
	I1025 14:27:02.666598    3336 main.go:141] libmachine: STDOUT: 
	I1025 14:27:02.666651    3336 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:27:02.666718    3336 fix.go:56] fixHost completed within 21.344916ms
	I1025 14:27:02.666738    3336 start.go:83] releasing machines lock for "multinode-418000", held for 21.505583ms
	W1025 14:27:02.666914    3336 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-418000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-418000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:27:02.674310    3336 out.go:177] 
	W1025 14:27:02.677481    3336 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:27:02.677508    3336 out.go:239] * 
	* 
	W1025 14:27:02.679005    3336 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:27:02.688379    3336 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-418000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-418000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000: exit status 7 (34.277ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.38s)
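
Every restart in this run dies on the same STDERR line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`. The qemu2 driver launches the VM through socket_vmnet_client, but no socket_vmnet daemon is answering on the socket, so the host never leaves the Stopped state. A quick reachability probe makes that precondition explicit; a minimal sketch using the SocketVMnetPath from the profile config above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the driver failure in the log:
			// the socket path exists but no socket_vmnet daemon is listening.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet reachable")
	}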

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-418000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-418000 node delete m03: exit status 89 (42.22075ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-418000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-418000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-418000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-418000 status --alsologtostderr: exit status 7 (32.015791ms)

-- stdout --
	multinode-418000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 14:27:02.871469    3350 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:27:02.871654    3350 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:27:02.871657    3350 out.go:309] Setting ErrFile to fd 2...
	I1025 14:27:02.871659    3350 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:27:02.871794    3350 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:27:02.871911    3350 out.go:303] Setting JSON to false
	I1025 14:27:02.871925    3350 mustload.go:65] Loading cluster: multinode-418000
	I1025 14:27:02.871972    3350 notify.go:220] Checking for updates...
	I1025 14:27:02.872150    3350 config.go:182] Loaded profile config "multinode-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:27:02.872155    3350 status.go:255] checking status of multinode-418000 ...
	I1025 14:27:02.872352    3350 status.go:330] multinode-418000 host status = "Stopped" (err=<nil>)
	I1025 14:27:02.872356    3350 status.go:343] host is not running, skipping remaining checks
	I1025 14:27:02.872358    3350 status.go:257] multinode-418000 status: &{Name:multinode-418000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-418000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000: exit status 7 (32.228875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
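
DeleteNode adds a fourth exit code to this run: 89 ("The control plane node must be running for this command"), alongside 7 (status on a stopped host), 80 (GUEST_PROVISION), and 85 (GUEST_NODE_RETRIEVE). A small triage helper, mapping only the codes this log itself demonstrates, not minikube's full reason table:

	package main

	import "fmt"

	// describeExit maps the minikube exit codes observed in this run to the
	// failure they accompanied; codes outside this log are not covered.
	func describeExit(code int) string {
		switch code {
		case 7:
			return "status: host not running (may be ok)"
		case 80:
			return "GUEST_PROVISION: VM failed to start (socket_vmnet refused)"
		case 85:
			return "GUEST_NODE_RETRIEVE: requested node not found"
		case 89:
			return "control plane node must be running for this command"
		default:
			return "exit code not seen in this run"
		}
	}

	func main() {
		for _, c := range []int{7, 80, 85, 89} {
			fmt.Printf("exit %d: %s\n", c, describeExit(c))
		}
	}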

TestMultiNode/serial/StopMultiNode (0.16s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-418000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-418000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-418000 status: exit status 7 (32.508958ms)

-- stdout --
	multinode-418000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-418000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-418000 status --alsologtostderr: exit status 7 (32.044917ms)

-- stdout --
	multinode-418000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 14:27:03.032243    3358 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:27:03.032405    3358 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:27:03.032408    3358 out.go:309] Setting ErrFile to fd 2...
	I1025 14:27:03.032411    3358 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:27:03.032555    3358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:27:03.032672    3358 out.go:303] Setting JSON to false
	I1025 14:27:03.032685    3358 mustload.go:65] Loading cluster: multinode-418000
	I1025 14:27:03.032752    3358 notify.go:220] Checking for updates...
	I1025 14:27:03.032910    3358 config.go:182] Loaded profile config "multinode-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:27:03.032914    3358 status.go:255] checking status of multinode-418000 ...
	I1025 14:27:03.033109    3358 status.go:330] multinode-418000 host status = "Stopped" (err=<nil>)
	I1025 14:27:03.033112    3358 status.go:343] host is not running, skipping remaining checks
	I1025 14:27:03.033114    3358 status.go:257] multinode-418000 status: &{Name:multinode-418000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-418000 status --alsologtostderr": multinode-418000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-418000 status --alsologtostderr": multinode-418000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000: exit status 7 (31.818625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.16s)
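
The two "incorrect number of stopped hosts/kubelets" messages above are count assertions over the plain-text status output: a two-node cluster should print "host: Stopped" and "kubelet: Stopped" once per node, but only the single control-plane profile ever existed. A minimal sketch of that style of check, with the expected node count of 2 taken from the test's intent rather than stated anywhere in this log:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// statusOut reproduces the single-node output captured above.
		statusOut := "multinode-418000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		const wantNodes = 2 // assumed: StopMultiNode expects both nodes reported
		for _, marker := range []string{"host: Stopped", "kubelet: Stopped"} {
			if got := strings.Count(statusOut, marker); got != wantNodes {
				fmt.Printf("incorrect number of %q: got %d, want %d\n", marker, got, wantNodes)
			}
		}
	}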

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-418000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-418000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.180232875s)

-- stdout --
	* [multinode-418000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-418000 in cluster multinode-418000
	* Restarting existing qemu2 VM for "multinode-418000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-418000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 14:27:03.096087    3362 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:27:03.096243    3362 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:27:03.096246    3362 out.go:309] Setting ErrFile to fd 2...
	I1025 14:27:03.096249    3362 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:27:03.096407    3362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:27:03.097363    3362 out.go:303] Setting JSON to false
	I1025 14:27:03.113242    3362 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1597,"bootTime":1698267626,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:27:03.113316    3362 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:27:03.118236    3362 out.go:177] * [multinode-418000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:27:03.125201    3362 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:27:03.125268    3362 notify.go:220] Checking for updates...
	I1025 14:27:03.129228    3362 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:27:03.130720    3362 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:27:03.134207    3362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:27:03.137205    3362 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:27:03.140273    3362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:27:03.143618    3362 config.go:182] Loaded profile config "multinode-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:27:03.143869    3362 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:27:03.148193    3362 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 14:27:03.155171    3362 start.go:298] selected driver: qemu2
	I1025 14:27:03.155179    3362 start.go:902] validating driver "qemu2" against &{Name:multinode-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.3 ClusterName:multinode-418000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:27:03.155224    3362 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:27:03.157465    3362 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:27:03.157490    3362 cni.go:84] Creating CNI manager for ""
	I1025 14:27:03.157494    3362 cni.go:136] 1 nodes found, recommending kindnet
	I1025 14:27:03.157500    3362 start_flags.go:323] config:
	{Name:multinode-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-418000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:27:03.161828    3362 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:27:03.168092    3362 out.go:177] * Starting control plane node multinode-418000 in cluster multinode-418000
	I1025 14:27:03.172180    3362 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:27:03.172200    3362 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:27:03.172209    3362 cache.go:56] Caching tarball of preloaded images
	I1025 14:27:03.172256    3362 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:27:03.172264    3362 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:27:03.172337    3362 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/multinode-418000/config.json ...
	I1025 14:27:03.172714    3362 start.go:365] acquiring machines lock for multinode-418000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:27:03.172741    3362 start.go:369] acquired machines lock for "multinode-418000" in 21.291µs
	I1025 14:27:03.172749    3362 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:27:03.172754    3362 fix.go:54] fixHost starting: 
	I1025 14:27:03.172865    3362 fix.go:102] recreateIfNeeded on multinode-418000: state=Stopped err=<nil>
	W1025 14:27:03.172873    3362 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:27:03.176169    3362 out.go:177] * Restarting existing qemu2 VM for "multinode-418000" ...
	I1025 14:27:03.184274    3362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:c5:33:1f:1b:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2
	I1025 14:27:03.186350    3362 main.go:141] libmachine: STDOUT: 
	I1025 14:27:03.186369    3362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:27:03.186409    3362 fix.go:56] fixHost completed within 13.647458ms
	I1025 14:27:03.186415    3362 start.go:83] releasing machines lock for "multinode-418000", held for 13.669ms
	W1025 14:27:03.186420    3362 start.go:691] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:27:03.186453    3362 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:27:03.186457    3362 start.go:706] Will try again in 5 seconds ...
	I1025 14:27:08.188642    3362 start.go:365] acquiring machines lock for multinode-418000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:27:08.189080    3362 start.go:369] acquired machines lock for "multinode-418000" in 343.834µs
	I1025 14:27:08.189191    3362 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:27:08.189216    3362 fix.go:54] fixHost starting: 
	I1025 14:27:08.189922    3362 fix.go:102] recreateIfNeeded on multinode-418000: state=Stopped err=<nil>
	W1025 14:27:08.189950    3362 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:27:08.195838    3362 out.go:177] * Restarting existing qemu2 VM for "multinode-418000" ...
	I1025 14:27:08.200517    3362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:c5:33:1f:1b:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/multinode-418000/disk.qcow2
	I1025 14:27:08.209816    3362 main.go:141] libmachine: STDOUT: 
	I1025 14:27:08.209869    3362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:27:08.209941    3362 fix.go:56] fixHost completed within 20.727125ms
	I1025 14:27:08.209963    3362 start.go:83] releasing machines lock for "multinode-418000", held for 20.862959ms
	W1025 14:27:08.210171    3362 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-418000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-418000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:27:08.217274    3362 out.go:177] 
	W1025 14:27:08.221350    3362 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:27:08.221391    3362 out.go:239] * 
	* 
	W1025 14:27:08.223757    3362 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:27:08.232403    3362 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-418000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000: exit status 7 (69.777834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (20.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-418000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-418000-m01 --driver=qemu2 
E1025 14:27:08.833034    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
E1025 14:27:08.839351    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
E1025 14:27:08.851390    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
E1025 14:27:08.873442    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
E1025 14:27:08.915513    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
E1025 14:27:08.997806    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
E1025 14:27:09.158682    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
E1025 14:27:09.480874    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
E1025 14:27:10.123273    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
E1025 14:27:11.405650    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
E1025 14:27:13.968038    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-418000-m01 --driver=qemu2 : exit status 80 (9.989890334s)

                                                
                                                
-- stdout --
	* [multinode-418000-m01] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-418000-m01 in cluster multinode-418000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-418000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-418000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-418000-m02 --driver=qemu2 
E1025 14:27:19.090705    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-418000-m02 --driver=qemu2 : exit status 80 (9.868323375s)

                                                
                                                
-- stdout --
	* [multinode-418000-m02] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-418000-m02 in cluster multinode-418000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-418000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-418000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-418000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-418000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-418000: exit status 89 (81.842541ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-418000"

                                                
                                                
-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-418000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-418000 -n multinode-418000: exit status 7 (32.713625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.11s)

                                                
                                    
x
+
TestPreload (9.97s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-570000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E1025 14:27:29.332733    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
E1025 14:27:36.790623    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-570000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.793361s)

                                                
                                                
-- stdout --
	* [test-preload-570000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-570000 in cluster test-preload-570000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-570000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:27:28.599479    3438 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:27:28.599619    3438 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:27:28.599622    3438 out.go:309] Setting ErrFile to fd 2...
	I1025 14:27:28.599634    3438 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:27:28.599754    3438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:27:28.600781    3438 out.go:303] Setting JSON to false
	I1025 14:27:28.616625    3438 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1622,"bootTime":1698267626,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:27:28.616691    3438 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:27:28.621535    3438 out.go:177] * [test-preload-570000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:27:28.629572    3438 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:27:28.629621    3438 notify.go:220] Checking for updates...
	I1025 14:27:28.636499    3438 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:27:28.639574    3438 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:27:28.649584    3438 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:27:28.657485    3438 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:27:28.661583    3438 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:27:28.665968    3438 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:27:28.666018    3438 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:27:28.669468    3438 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:27:28.677536    3438 start.go:298] selected driver: qemu2
	I1025 14:27:28.677541    3438 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:27:28.677548    3438 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:27:28.679942    3438 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:27:28.683552    3438 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:27:28.686639    3438 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:27:28.686660    3438 cni.go:84] Creating CNI manager for ""
	I1025 14:27:28.686671    3438 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:27:28.686678    3438 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 14:27:28.686685    3438 start_flags.go:323] config:
	{Name:test-preload-570000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock:
SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:27:28.691441    3438 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:27:28.698577    3438 out.go:177] * Starting control plane node test-preload-570000 in cluster test-preload-570000
	I1025 14:27:28.702578    3438 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1025 14:27:28.702680    3438 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/test-preload-570000/config.json ...
	I1025 14:27:28.702705    3438 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/test-preload-570000/config.json: {Name:mk67b30646492eca4e58ffca145480d0b9ec8a21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:27:28.702718    3438 cache.go:107] acquiring lock: {Name:mkb35cb21a42ff8ed731669b39590bafefcc2df8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:27:28.702741    3438 cache.go:107] acquiring lock: {Name:mkd9ad8c7aed91a005f014b93a97b85fb5d6e7d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:27:28.702731    3438 cache.go:107] acquiring lock: {Name:mk971251ea1c0943d84d5707c64e4b288df20f7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:27:28.702865    3438 cache.go:107] acquiring lock: {Name:mkebf0d40f5825f7c5443794b2ec3975badc79da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:27:28.702927    3438 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 14:27:28.702937    3438 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1025 14:27:28.702944    3438 cache.go:107] acquiring lock: {Name:mk3045f0957565235e9688f90c8a344a36a2dbf2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:27:28.702959    3438 cache.go:107] acquiring lock: {Name:mk7debd1b5e0a561107c8d49ce49bae797ee2869 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:27:28.702973    3438 start.go:365] acquiring machines lock for test-preload-570000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:27:28.702930    3438 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1025 14:27:28.703010    3438 start.go:369] acquired machines lock for "test-preload-570000" in 29.584µs
	I1025 14:27:28.703034    3438 cache.go:107] acquiring lock: {Name:mk405f2d09c7593291d2a0c47a02082b18f0c9c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:27:28.703044    3438 cache.go:107] acquiring lock: {Name:mk1ba5c2f27b6e694997170b9819a596df31698f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:27:28.703024    3438 start.go:93] Provisioning new machine with config: &{Name:test-preload-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:27:28.703125    3438 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:27:28.703211    3438 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1025 14:27:28.703282    3438 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I1025 14:27:28.710538    3438 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:27:28.703246    3438 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1025 14:27:28.703277    3438 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1025 14:27:28.703605    3438 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 14:27:28.711386    3438 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 14:27:28.711531    3438 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1025 14:27:28.715112    3438 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1025 14:27:28.715164    3438 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1025 14:27:28.715221    3438 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1025 14:27:28.719039    3438 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 14:27:28.719121    3438 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1025 14:27:28.719178    3438 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1025 14:27:28.728405    3438 start.go:159] libmachine.API.Create for "test-preload-570000" (driver="qemu2")
	I1025 14:27:28.728421    3438 client.go:168] LocalClient.Create starting
	I1025 14:27:28.728498    3438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:27:28.728531    3438 main.go:141] libmachine: Decoding PEM data...
	I1025 14:27:28.728542    3438 main.go:141] libmachine: Parsing certificate...
	I1025 14:27:28.728572    3438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:27:28.728591    3438 main.go:141] libmachine: Decoding PEM data...
	I1025 14:27:28.728599    3438 main.go:141] libmachine: Parsing certificate...
	I1025 14:27:28.728959    3438 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:27:28.850933    3438 main.go:141] libmachine: Creating SSH key...
	I1025 14:27:28.992506    3438 main.go:141] libmachine: Creating Disk image...
	I1025 14:27:28.992528    3438 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:27:28.992741    3438 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/disk.qcow2
	I1025 14:27:29.005573    3438 main.go:141] libmachine: STDOUT: 
	I1025 14:27:29.005602    3438 main.go:141] libmachine: STDERR: 
	I1025 14:27:29.005675    3438 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/disk.qcow2 +20000M
	I1025 14:27:29.017483    3438 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:27:29.017512    3438 main.go:141] libmachine: STDERR: 
	I1025 14:27:29.017526    3438 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/disk.qcow2
	I1025 14:27:29.017533    3438 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:27:29.017573    3438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:aa:0e:74:01:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/disk.qcow2
	I1025 14:27:29.019428    3438 main.go:141] libmachine: STDOUT: 
	I1025 14:27:29.019443    3438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:27:29.019463    3438 client.go:171] LocalClient.Create took 291.03775ms
	I1025 14:27:29.571907    3438 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W1025 14:27:29.608005    3438 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 14:27:29.608026    3438 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 14:27:29.724693    3438 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1025 14:27:29.801081    3438 cache.go:157] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 14:27:29.801100    3438 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.098386208s
	I1025 14:27:29.801116    3438 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 14:27:29.841128    3438 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1025 14:27:30.093803    3438 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1025 14:27:30.222042    3438 cache.go:157] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1025 14:27:30.222069    3438 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.519230667s
	I1025 14:27:30.222078    3438 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1025 14:27:30.281731    3438 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1025 14:27:30.281755    3438 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1025 14:27:30.554070    3438 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1025 14:27:30.788223    3438 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1025 14:27:31.020280    3438 start.go:128] duration metric: createHost completed in 2.317116708s
	I1025 14:27:31.020325    3438 start.go:83] releasing machines lock for "test-preload-570000", held for 2.317304792s
	W1025 14:27:31.020381    3438 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:27:31.032973    3438 out.go:177] * Deleting "test-preload-570000" in qemu2 ...
	W1025 14:27:31.058198    3438 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:27:31.058370    3438 start.go:706] Will try again in 5 seconds ...
	I1025 14:27:31.729698    3438 cache.go:157] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1025 14:27:31.729766    3438 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.026789166s
	I1025 14:27:31.729797    3438 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1025 14:27:32.289788    3438 cache.go:157] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1025 14:27:32.289846    3438 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.586878791s
	I1025 14:27:32.289896    3438 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1025 14:27:33.329019    3438 cache.go:157] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1025 14:27:33.329061    3438 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.626347375s
	I1025 14:27:33.329085    3438 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1025 14:27:33.330182    3438 cache.go:157] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1025 14:27:33.330218    3438 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.627498916s
	I1025 14:27:33.330248    3438 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1025 14:27:34.579421    3438 cache.go:157] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1025 14:27:34.579472    3438 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.876521208s
	I1025 14:27:34.579514    3438 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1025 14:27:36.059012    3438 start.go:365] acquiring machines lock for test-preload-570000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:27:36.059391    3438 start.go:369] acquired machines lock for "test-preload-570000" in 301.291µs
	I1025 14:27:36.059486    3438 start.go:93] Provisioning new machine with config: &{Name:test-preload-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-570000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:27:36.059762    3438 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:27:36.070307    3438 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:27:36.116708    3438 start.go:159] libmachine.API.Create for "test-preload-570000" (driver="qemu2")
	I1025 14:27:36.116746    3438 client.go:168] LocalClient.Create starting
	I1025 14:27:36.116852    3438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:27:36.116902    3438 main.go:141] libmachine: Decoding PEM data...
	I1025 14:27:36.116919    3438 main.go:141] libmachine: Parsing certificate...
	I1025 14:27:36.117008    3438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:27:36.117042    3438 main.go:141] libmachine: Decoding PEM data...
	I1025 14:27:36.117055    3438 main.go:141] libmachine: Parsing certificate...
	I1025 14:27:36.117501    3438 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:27:36.249702    3438 main.go:141] libmachine: Creating SSH key...
	I1025 14:27:36.289569    3438 main.go:141] libmachine: Creating Disk image...
	I1025 14:27:36.289580    3438 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:27:36.289751    3438 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/disk.qcow2
	I1025 14:27:36.301997    3438 main.go:141] libmachine: STDOUT: 
	I1025 14:27:36.302013    3438 main.go:141] libmachine: STDERR: 
	I1025 14:27:36.302070    3438 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/disk.qcow2 +20000M
	I1025 14:27:36.312920    3438 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:27:36.312939    3438 main.go:141] libmachine: STDERR: 
	I1025 14:27:36.312952    3438 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/disk.qcow2
	I1025 14:27:36.312963    3438 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:27:36.312999    3438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:42:96:26:83:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/test-preload-570000/disk.qcow2
	I1025 14:27:36.314844    3438 main.go:141] libmachine: STDOUT: 
	I1025 14:27:36.314858    3438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:27:36.314871    3438 client.go:171] LocalClient.Create took 198.119583ms
	I1025 14:27:38.144473    3438 cache.go:157] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I1025 14:27:38.144544    3438 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.44154s
	I1025 14:27:38.144591    3438 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I1025 14:27:38.144639    3438 cache.go:87] Successfully saved all images to host disk.
	I1025 14:27:38.317045    3438 start.go:128] duration metric: createHost completed in 2.257261875s
	I1025 14:27:38.317137    3438 start.go:83] releasing machines lock for "test-preload-570000", held for 2.257687s
	W1025 14:27:38.317378    3438 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:27:38.330758    3438 out.go:177] 
	W1025 14:27:38.334843    3438 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:27:38.334898    3438 out.go:239] * 
	* 
	W1025 14:27:38.337952    3438 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:27:38.347781    3438 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-570000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:523: *** TestPreload FAILED at 2023-10-25 14:27:38.365435 -0700 PDT m=+1049.232354043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-570000 -n test-preload-570000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-570000 -n test-preload-570000: exit status 7 (68.83375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-570000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-570000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-570000
--- FAIL: TestPreload (9.97s)

                                                
                                    
x
+
TestScheduledStopUnix (9.9s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-501000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-501000 --memory=2048 --driver=qemu2 : exit status 80 (9.722989959s)

                                                
                                                
-- stdout --
	* [scheduled-stop-501000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-501000 in cluster scheduled-stop-501000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-501000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-501000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-501000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-501000 in cluster scheduled-stop-501000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-501000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-501000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-10-25 14:27:48.258196 -0700 PDT m=+1059.125112501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-501000 -n scheduled-stop-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-501000 -n scheduled-stop-501000: exit status 7 (68.907542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-501000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-501000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-501000
--- FAIL: TestScheduledStopUnix (9.90s)

                                                
                                    
x
+
TestSkaffold (12.07s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3223045050 version
E1025 14:27:49.815252    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
skaffold_test.go:63: skaffold version: v2.8.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-253000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-253000 --memory=2600 --driver=qemu2 : exit status 80 (9.793057792s)

                                                
                                                
-- stdout --
	* [skaffold-253000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-253000 in cluster skaffold-253000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-253000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-253000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-253000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-253000 in cluster skaffold-253000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-253000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-253000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:523: *** TestSkaffold FAILED at 2023-10-25 14:28:00.335367 -0700 PDT m=+1071.202280543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-253000 -n skaffold-253000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-253000 -n skaffold-253000: exit status 7 (66.286792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-253000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-253000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-253000
--- FAIL: TestSkaffold (12.07s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (157.21s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:107: v1.6.2 release installation failed: bad response code: 404
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-25 14:31:17.681033 -0700 PDT m=+1268.547903459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-616000 -n running-upgrade-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-616000 -n running-upgrade-616000: exit status 85 (86.999625ms)

                                                
                                                
-- stdout --
	* Profile "running-upgrade-616000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-616000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-616000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-616000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-616000\"")
helpers_test.go:175: Cleaning up "running-upgrade-616000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-616000
--- FAIL: TestRunningBinaryUpgrade (157.21s)

                                                
                                    
x
+
TestKubernetesUpgrade (15.34s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-874000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-874000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.796694042s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-874000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-874000 in cluster kubernetes-upgrade-874000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-874000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:31:18.043697    4002 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:31:18.043897    4002 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:31:18.043900    4002 out.go:309] Setting ErrFile to fd 2...
	I1025 14:31:18.043903    4002 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:31:18.044028    4002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:31:18.045041    4002 out.go:303] Setting JSON to false
	I1025 14:31:18.061074    4002 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1852,"bootTime":1698267626,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:31:18.061156    4002 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:31:18.064866    4002 out.go:177] * [kubernetes-upgrade-874000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:31:18.071791    4002 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:31:18.075741    4002 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:31:18.071863    4002 notify.go:220] Checking for updates...
	I1025 14:31:18.081730    4002 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:31:18.084733    4002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:31:18.086077    4002 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:31:18.088704    4002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:31:18.092120    4002 config.go:182] Loaded profile config "cert-expiration-410000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:31:18.092183    4002 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:31:18.092225    4002 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:31:18.096547    4002 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:31:18.103691    4002 start.go:298] selected driver: qemu2
	I1025 14:31:18.103697    4002 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:31:18.103703    4002 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:31:18.105963    4002 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:31:18.108779    4002 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:31:18.111850    4002 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 14:31:18.111869    4002 cni.go:84] Creating CNI manager for ""
	I1025 14:31:18.111878    4002 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 14:31:18.111883    4002 start_flags.go:323] config:
	{Name:kubernetes-upgrade-874000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-874000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:31:18.116492    4002 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:31:18.122687    4002 out.go:177] * Starting control plane node kubernetes-upgrade-874000 in cluster kubernetes-upgrade-874000
	I1025 14:31:18.126698    4002 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 14:31:18.126712    4002 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1025 14:31:18.126720    4002 cache.go:56] Caching tarball of preloaded images
	I1025 14:31:18.126771    4002 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:31:18.126777    4002 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 14:31:18.126853    4002 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/kubernetes-upgrade-874000/config.json ...
	I1025 14:31:18.126864    4002 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/kubernetes-upgrade-874000/config.json: {Name:mkb16324cb5045a756a043cee1ea76b2465f3ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:31:18.127069    4002 start.go:365] acquiring machines lock for kubernetes-upgrade-874000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:31:18.127099    4002 start.go:369] acquired machines lock for "kubernetes-upgrade-874000" in 24.083µs
	I1025 14:31:18.127110    4002 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-874000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:31:18.127139    4002 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:31:18.135710    4002 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:31:18.151305    4002 start.go:159] libmachine.API.Create for "kubernetes-upgrade-874000" (driver="qemu2")
	I1025 14:31:18.151325    4002 client.go:168] LocalClient.Create starting
	I1025 14:31:18.151397    4002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:31:18.151426    4002 main.go:141] libmachine: Decoding PEM data...
	I1025 14:31:18.151435    4002 main.go:141] libmachine: Parsing certificate...
	I1025 14:31:18.151475    4002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:31:18.151493    4002 main.go:141] libmachine: Decoding PEM data...
	I1025 14:31:18.151498    4002 main.go:141] libmachine: Parsing certificate...
	I1025 14:31:18.151814    4002 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:31:18.277606    4002 main.go:141] libmachine: Creating SSH key...
	I1025 14:31:18.339899    4002 main.go:141] libmachine: Creating Disk image...
	I1025 14:31:18.339904    4002 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:31:18.340081    4002 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/disk.qcow2
	I1025 14:31:18.352474    4002 main.go:141] libmachine: STDOUT: 
	I1025 14:31:18.352489    4002 main.go:141] libmachine: STDERR: 
	I1025 14:31:18.352551    4002 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/disk.qcow2 +20000M
	I1025 14:31:18.363003    4002 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:31:18.363021    4002 main.go:141] libmachine: STDERR: 
	I1025 14:31:18.363038    4002 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/disk.qcow2
	I1025 14:31:18.363045    4002 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:31:18.363072    4002 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:f9:06:b5:0a:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/disk.qcow2
	I1025 14:31:18.364830    4002 main.go:141] libmachine: STDOUT: 
	I1025 14:31:18.364842    4002 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:31:18.364860    4002 client.go:171] LocalClient.Create took 213.529334ms
	I1025 14:31:20.367066    4002 start.go:128] duration metric: createHost completed in 2.239898625s
	I1025 14:31:20.367152    4002 start.go:83] releasing machines lock for "kubernetes-upgrade-874000", held for 2.24004275s
	W1025 14:31:20.367203    4002 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:31:20.375498    4002 out.go:177] * Deleting "kubernetes-upgrade-874000" in qemu2 ...
	W1025 14:31:20.405513    4002 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:31:20.405554    4002 start.go:706] Will try again in 5 seconds ...
	I1025 14:31:25.407871    4002 start.go:365] acquiring machines lock for kubernetes-upgrade-874000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:31:25.408296    4002 start.go:369] acquired machines lock for "kubernetes-upgrade-874000" in 317.875µs
	I1025 14:31:25.408419    4002 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-874000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:31:25.408610    4002 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:31:25.418147    4002 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:31:25.469343    4002 start.go:159] libmachine.API.Create for "kubernetes-upgrade-874000" (driver="qemu2")
	I1025 14:31:25.469391    4002 client.go:168] LocalClient.Create starting
	I1025 14:31:25.469510    4002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:31:25.469574    4002 main.go:141] libmachine: Decoding PEM data...
	I1025 14:31:25.469599    4002 main.go:141] libmachine: Parsing certificate...
	I1025 14:31:25.469680    4002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:31:25.469722    4002 main.go:141] libmachine: Decoding PEM data...
	I1025 14:31:25.469736    4002 main.go:141] libmachine: Parsing certificate...
	I1025 14:31:25.470217    4002 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:31:25.604767    4002 main.go:141] libmachine: Creating SSH key...
	I1025 14:31:25.740575    4002 main.go:141] libmachine: Creating Disk image...
	I1025 14:31:25.740583    4002 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:31:25.740740    4002 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/disk.qcow2
	I1025 14:31:25.752720    4002 main.go:141] libmachine: STDOUT: 
	I1025 14:31:25.752739    4002 main.go:141] libmachine: STDERR: 
	I1025 14:31:25.752795    4002 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/disk.qcow2 +20000M
	I1025 14:31:25.763251    4002 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:31:25.763279    4002 main.go:141] libmachine: STDERR: 
	I1025 14:31:25.763298    4002 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/disk.qcow2
	I1025 14:31:25.763313    4002 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:31:25.763365    4002 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:eb:78:98:c6:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/disk.qcow2
	I1025 14:31:25.765163    4002 main.go:141] libmachine: STDOUT: 
	I1025 14:31:25.765177    4002 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:31:25.765190    4002 client.go:171] LocalClient.Create took 295.793208ms
	I1025 14:31:27.767385    4002 start.go:128] duration metric: createHost completed in 2.35873375s
	I1025 14:31:27.767457    4002 start.go:83] releasing machines lock for "kubernetes-upgrade-874000", held for 2.359136042s
	W1025 14:31:27.767903    4002 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:31:27.779607    4002 out.go:177] 
	W1025 14:31:27.782732    4002 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:31:27.782765    4002 out.go:239] * 
	* 
	W1025 14:31:27.785383    4002 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:31:27.794512    4002 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-874000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-874000
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-874000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-874000 status --format={{.Host}}: exit status 7 (37.104334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-874000 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-874000 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.194339334s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-874000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-874000 in cluster kubernetes-upgrade-874000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-874000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-874000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:31:27.981467    4026 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:31:27.981642    4026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:31:27.981647    4026 out.go:309] Setting ErrFile to fd 2...
	I1025 14:31:27.981649    4026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:31:27.981767    4026 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:31:27.982725    4026 out.go:303] Setting JSON to false
	I1025 14:31:27.998667    4026 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1861,"bootTime":1698267626,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:31:27.998733    4026 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:31:28.003673    4026 out.go:177] * [kubernetes-upgrade-874000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:31:28.010665    4026 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:31:28.014672    4026 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:31:28.010755    4026 notify.go:220] Checking for updates...
	I1025 14:31:28.020639    4026 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:31:28.028610    4026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:31:28.031642    4026 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:31:28.035650    4026 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:31:28.039043    4026 config.go:182] Loaded profile config "kubernetes-upgrade-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1025 14:31:28.039331    4026 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:31:28.043477    4026 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 14:31:28.050641    4026 start.go:298] selected driver: qemu2
	I1025 14:31:28.050648    4026 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-874000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:31:28.050703    4026 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:31:28.053176    4026 cni.go:84] Creating CNI manager for ""
	I1025 14:31:28.053190    4026 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:31:28.053196    4026 start_flags.go:323] config:
	{Name:kubernetes-upgrade-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:kubernetes-upgrade-874000 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:31:28.057583    4026 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:31:28.064658    4026 out.go:177] * Starting control plane node kubernetes-upgrade-874000 in cluster kubernetes-upgrade-874000
	I1025 14:31:28.068713    4026 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:31:28.068729    4026 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:31:28.068741    4026 cache.go:56] Caching tarball of preloaded images
	I1025 14:31:28.068808    4026 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:31:28.068813    4026 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:31:28.068885    4026 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/kubernetes-upgrade-874000/config.json ...
	I1025 14:31:28.069320    4026 start.go:365] acquiring machines lock for kubernetes-upgrade-874000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:31:28.069345    4026 start.go:369] acquired machines lock for "kubernetes-upgrade-874000" in 19.334µs
	I1025 14:31:28.069353    4026 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:31:28.069358    4026 fix.go:54] fixHost starting: 
	I1025 14:31:28.069471    4026 fix.go:102] recreateIfNeeded on kubernetes-upgrade-874000: state=Stopped err=<nil>
	W1025 14:31:28.069479    4026 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:31:28.077682    4026 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-874000" ...
	I1025 14:31:28.081609    4026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:eb:78:98:c6:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/disk.qcow2
	I1025 14:31:28.083733    4026 main.go:141] libmachine: STDOUT: 
	I1025 14:31:28.083751    4026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:31:28.083780    4026 fix.go:56] fixHost completed within 14.421542ms
	I1025 14:31:28.083785    4026 start.go:83] releasing machines lock for "kubernetes-upgrade-874000", held for 14.435792ms
	W1025 14:31:28.083791    4026 start.go:691] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:31:28.083822    4026 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:31:28.083826    4026 start.go:706] Will try again in 5 seconds ...
	I1025 14:31:33.086053    4026 start.go:365] acquiring machines lock for kubernetes-upgrade-874000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:31:33.086399    4026 start.go:369] acquired machines lock for "kubernetes-upgrade-874000" in 245.417µs
	I1025 14:31:33.086546    4026 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:31:33.086566    4026 fix.go:54] fixHost starting: 
	I1025 14:31:33.087282    4026 fix.go:102] recreateIfNeeded on kubernetes-upgrade-874000: state=Stopped err=<nil>
	W1025 14:31:33.087307    4026 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:31:33.096806    4026 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-874000" ...
	I1025 14:31:33.100986    4026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:eb:78:98:c6:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubernetes-upgrade-874000/disk.qcow2
	I1025 14:31:33.110838    4026 main.go:141] libmachine: STDOUT: 
	I1025 14:31:33.110891    4026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:31:33.110982    4026 fix.go:56] fixHost completed within 24.418458ms
	I1025 14:31:33.111001    4026 start.go:83] releasing machines lock for "kubernetes-upgrade-874000", held for 24.580083ms
	W1025 14:31:33.111211    4026 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-874000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-874000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:31:33.117754    4026 out.go:177] 
	W1025 14:31:33.121855    4026 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:31:33.121897    4026 out.go:239] * 
	* 
	W1025 14:31:33.124600    4026 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:31:33.132620    4026 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:258: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-874000 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-874000 version --output=json
version_upgrade_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-874000 version --output=json: exit status 1 (65.119541ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-874000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:263: error running kubectl: exit status 1
panic.go:523: *** TestKubernetesUpgrade FAILED at 2023-10-25 14:31:33.212537 -0700 PDT m=+1284.079403959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-874000 -n kubernetes-upgrade-874000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-874000 -n kubernetes-upgrade-874000: exit status 7 (37.246625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-874000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-874000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-874000
--- FAIL: TestKubernetesUpgrade (15.34s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.5s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17488
- KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3393530630/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.50s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.4s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17488
- KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3046526881/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (157.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:168: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (157.28s)

                                                
                                    
x
+
TestPause/serial/Start (9.89s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-270000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-270000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.816870584s)

                                                
                                                
-- stdout --
	* [pause-270000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-270000 in cluster pause-270000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-270000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-270000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-270000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-270000 -n pause-270000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-270000 -n pause-270000: exit status 7 (70.682708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-270000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (9.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-354000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-354000 --driver=qemu2 : exit status 80 (9.815098375s)

                                                
                                                
-- stdout --
	* [NoKubernetes-354000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-354000 in cluster NoKubernetes-354000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-354000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-354000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-354000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-354000 -n NoKubernetes-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-354000 -n NoKubernetes-354000: exit status 7 (77.053084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-354000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (5.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-354000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-354000 --no-kubernetes --driver=qemu2 : exit status 80 (5.244765417s)

                                                
                                                
-- stdout --
	* [NoKubernetes-354000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-354000
	* Restarting existing qemu2 VM for "NoKubernetes-354000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-354000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-354000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-354000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-354000 -n NoKubernetes-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-354000 -n NoKubernetes-354000: exit status 7 (73.142292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-354000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-354000 --no-kubernetes --driver=qemu2 
E1025 14:32:08.748999    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-354000 --no-kubernetes --driver=qemu2 : exit status 80 (5.251137875s)

                                                
                                                
-- stdout --
	* [NoKubernetes-354000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-354000
	* Restarting existing qemu2 VM for "NoKubernetes-354000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-354000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-354000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-354000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-354000 -n NoKubernetes-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-354000 -n NoKubernetes-354000: exit status 7 (71.416625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-354000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (5.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-354000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-354000 --driver=qemu2 : exit status 80 (5.265909709s)

                                                
                                                
-- stdout --
	* [NoKubernetes-354000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-354000
	* Restarting existing qemu2 VM for "NoKubernetes-354000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-354000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-354000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-354000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-354000 -n NoKubernetes-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-354000 -n NoKubernetes-354000: exit status 7 (69.803958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-354000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.753673292s)

                                                
                                                
-- stdout --
	* [auto-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-475000 in cluster auto-475000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-475000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:32:17.247743    4169 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:32:17.247897    4169 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:32:17.247901    4169 out.go:309] Setting ErrFile to fd 2...
	I1025 14:32:17.247903    4169 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:32:17.248028    4169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:32:17.249032    4169 out.go:303] Setting JSON to false
	I1025 14:32:17.265011    4169 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1911,"bootTime":1698267626,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:32:17.265098    4169 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:32:17.270499    4169 out.go:177] * [auto-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:32:17.274415    4169 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:32:17.274453    4169 notify.go:220] Checking for updates...
	I1025 14:32:17.278412    4169 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:32:17.282207    4169 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:32:17.285366    4169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:32:17.288427    4169 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:32:17.291418    4169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:32:17.294797    4169 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:32:17.294834    4169 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:32:17.299381    4169 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:32:17.306395    4169 start.go:298] selected driver: qemu2
	I1025 14:32:17.306403    4169 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:32:17.306409    4169 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:32:17.308736    4169 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:32:17.311425    4169 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:32:17.314477    4169 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:32:17.314498    4169 cni.go:84] Creating CNI manager for ""
	I1025 14:32:17.314504    4169 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:32:17.314508    4169 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 14:32:17.314514    4169 start_flags.go:323] config:
	{Name:auto-475000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:auto-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0
AutoPauseInterval:1m0s GPUs:}
	I1025 14:32:17.318840    4169 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:32:17.326431    4169 out.go:177] * Starting control plane node auto-475000 in cluster auto-475000
	I1025 14:32:17.329312    4169 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:32:17.329328    4169 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:32:17.329340    4169 cache.go:56] Caching tarball of preloaded images
	I1025 14:32:17.329402    4169 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:32:17.329408    4169 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:32:17.329469    4169 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/auto-475000/config.json ...
	I1025 14:32:17.329481    4169 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/auto-475000/config.json: {Name:mk9d39924e3978285a91e06e2e3f7de3b01537e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:32:17.329680    4169 start.go:365] acquiring machines lock for auto-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:32:17.329708    4169 start.go:369] acquired machines lock for "auto-475000" in 22.875µs
	I1025 14:32:17.329717    4169 start.go:93] Provisioning new machine with config: &{Name:auto-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.3 ClusterName:auto-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:32:17.329749    4169 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:32:17.337363    4169 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:32:17.354025    4169 start.go:159] libmachine.API.Create for "auto-475000" (driver="qemu2")
	I1025 14:32:17.354047    4169 client.go:168] LocalClient.Create starting
	I1025 14:32:17.354109    4169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:32:17.354136    4169 main.go:141] libmachine: Decoding PEM data...
	I1025 14:32:17.354152    4169 main.go:141] libmachine: Parsing certificate...
	I1025 14:32:17.354185    4169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:32:17.354205    4169 main.go:141] libmachine: Decoding PEM data...
	I1025 14:32:17.354211    4169 main.go:141] libmachine: Parsing certificate...
	I1025 14:32:17.354535    4169 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:32:17.477396    4169 main.go:141] libmachine: Creating SSH key...
	I1025 14:32:17.555360    4169 main.go:141] libmachine: Creating Disk image...
	I1025 14:32:17.555368    4169 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:32:17.555535    4169 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/disk.qcow2
	I1025 14:32:17.567443    4169 main.go:141] libmachine: STDOUT: 
	I1025 14:32:17.567459    4169 main.go:141] libmachine: STDERR: 
	I1025 14:32:17.567519    4169 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/disk.qcow2 +20000M
	I1025 14:32:17.578210    4169 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:32:17.578221    4169 main.go:141] libmachine: STDERR: 
	I1025 14:32:17.578237    4169 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/disk.qcow2
	I1025 14:32:17.578246    4169 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:32:17.578282    4169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:90:ea:b6:cc:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/disk.qcow2
	I1025 14:32:17.580081    4169 main.go:141] libmachine: STDOUT: 
	I1025 14:32:17.580094    4169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:32:17.580112    4169 client.go:171] LocalClient.Create took 226.057292ms
	I1025 14:32:19.582283    4169 start.go:128] duration metric: createHost completed in 2.252540958s
	I1025 14:32:19.582327    4169 start.go:83] releasing machines lock for "auto-475000", held for 2.2526395s
	W1025 14:32:19.582352    4169 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:32:19.587465    4169 out.go:177] * Deleting "auto-475000" in qemu2 ...
	W1025 14:32:19.606948    4169 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:32:19.606977    4169 start.go:706] Will try again in 5 seconds ...
	I1025 14:32:24.609144    4169 start.go:365] acquiring machines lock for auto-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:32:24.609605    4169 start.go:369] acquired machines lock for "auto-475000" in 350.333µs
	I1025 14:32:24.609740    4169 start.go:93] Provisioning new machine with config: &{Name:auto-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.3 ClusterName:auto-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:32:24.609942    4169 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:32:24.620594    4169 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:32:24.668729    4169 start.go:159] libmachine.API.Create for "auto-475000" (driver="qemu2")
	I1025 14:32:24.668772    4169 client.go:168] LocalClient.Create starting
	I1025 14:32:24.668876    4169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:32:24.668946    4169 main.go:141] libmachine: Decoding PEM data...
	I1025 14:32:24.668963    4169 main.go:141] libmachine: Parsing certificate...
	I1025 14:32:24.669027    4169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:32:24.669060    4169 main.go:141] libmachine: Decoding PEM data...
	I1025 14:32:24.669072    4169 main.go:141] libmachine: Parsing certificate...
	I1025 14:32:24.669540    4169 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:32:24.804515    4169 main.go:141] libmachine: Creating SSH key...
	I1025 14:32:24.899423    4169 main.go:141] libmachine: Creating Disk image...
	I1025 14:32:24.899429    4169 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:32:24.899587    4169 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/disk.qcow2
	I1025 14:32:24.911810    4169 main.go:141] libmachine: STDOUT: 
	I1025 14:32:24.911826    4169 main.go:141] libmachine: STDERR: 
	I1025 14:32:24.911883    4169 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/disk.qcow2 +20000M
	I1025 14:32:24.922527    4169 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:32:24.922541    4169 main.go:141] libmachine: STDERR: 
	I1025 14:32:24.922562    4169 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/disk.qcow2
	I1025 14:32:24.922567    4169 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:32:24.922613    4169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:91:4a:53:7e:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/auto-475000/disk.qcow2
	I1025 14:32:24.924301    4169 main.go:141] libmachine: STDOUT: 
	I1025 14:32:24.924315    4169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:32:24.924326    4169 client.go:171] LocalClient.Create took 255.5525ms
	I1025 14:32:26.926472    4169 start.go:128] duration metric: createHost completed in 2.316504084s
	I1025 14:32:26.926529    4169 start.go:83] releasing machines lock for "auto-475000", held for 2.31692975s
	W1025 14:32:26.926926    4169 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:32:26.941983    4169 out.go:177] 
	W1025 14:32:26.944672    4169 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:32:26.944692    4169 out.go:239] * 
	* 
	W1025 14:32:26.946184    4169 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:32:26.957545    4169 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
E1025 14:32:36.457969    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/ingress-addon-legacy-187000/client.crt: no such file or directory
E1025 14:32:36.705566    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.75714325s)

                                                
                                                
-- stdout --
	* [flannel-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-475000 in cluster flannel-475000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-475000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:32:29.204758    4281 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:32:29.204892    4281 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:32:29.204895    4281 out.go:309] Setting ErrFile to fd 2...
	I1025 14:32:29.204898    4281 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:32:29.205019    4281 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:32:29.206073    4281 out.go:303] Setting JSON to false
	I1025 14:32:29.221944    4281 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1923,"bootTime":1698267626,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:32:29.222025    4281 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:32:29.228495    4281 out.go:177] * [flannel-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:32:29.240476    4281 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:32:29.236593    4281 notify.go:220] Checking for updates...
	I1025 14:32:29.246448    4281 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:32:29.249557    4281 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:32:29.252572    4281 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:32:29.255532    4281 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:32:29.258518    4281 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:32:29.261948    4281 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:32:29.261986    4281 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:32:29.266416    4281 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:32:29.273475    4281 start.go:298] selected driver: qemu2
	I1025 14:32:29.273482    4281 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:32:29.273487    4281 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:32:29.275824    4281 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:32:29.278418    4281 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:32:29.281641    4281 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:32:29.281670    4281 cni.go:84] Creating CNI manager for "flannel"
	I1025 14:32:29.281674    4281 start_flags.go:318] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1025 14:32:29.281680    4281 start_flags.go:323] config:
	{Name:flannel-475000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:flannel-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:32:29.286457    4281 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:32:29.293480    4281 out.go:177] * Starting control plane node flannel-475000 in cluster flannel-475000
	I1025 14:32:29.297497    4281 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:32:29.297513    4281 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:32:29.297524    4281 cache.go:56] Caching tarball of preloaded images
	I1025 14:32:29.297582    4281 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:32:29.297589    4281 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:32:29.297655    4281 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/flannel-475000/config.json ...
	I1025 14:32:29.297667    4281 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/flannel-475000/config.json: {Name:mkc219eacdfa5d9afb1bf324df73e42e12617267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:32:29.297873    4281 start.go:365] acquiring machines lock for flannel-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:32:29.297903    4281 start.go:369] acquired machines lock for "flannel-475000" in 23.916µs
	I1025 14:32:29.297913    4281 start.go:93] Provisioning new machine with config: &{Name:flannel-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.3 ClusterName:flannel-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:32:29.297942    4281 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:32:29.306535    4281 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:32:29.323708    4281 start.go:159] libmachine.API.Create for "flannel-475000" (driver="qemu2")
	I1025 14:32:29.323733    4281 client.go:168] LocalClient.Create starting
	I1025 14:32:29.323810    4281 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:32:29.323834    4281 main.go:141] libmachine: Decoding PEM data...
	I1025 14:32:29.323846    4281 main.go:141] libmachine: Parsing certificate...
	I1025 14:32:29.323878    4281 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:32:29.323897    4281 main.go:141] libmachine: Decoding PEM data...
	I1025 14:32:29.323904    4281 main.go:141] libmachine: Parsing certificate...
	I1025 14:32:29.324233    4281 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:32:29.446126    4281 main.go:141] libmachine: Creating SSH key...
	I1025 14:32:29.502271    4281 main.go:141] libmachine: Creating Disk image...
	I1025 14:32:29.502277    4281 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:32:29.502419    4281 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/disk.qcow2
	I1025 14:32:29.514747    4281 main.go:141] libmachine: STDOUT: 
	I1025 14:32:29.514761    4281 main.go:141] libmachine: STDERR: 
	I1025 14:32:29.514827    4281 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/disk.qcow2 +20000M
	I1025 14:32:29.525407    4281 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:32:29.525424    4281 main.go:141] libmachine: STDERR: 
	I1025 14:32:29.525443    4281 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/disk.qcow2
	I1025 14:32:29.525449    4281 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:32:29.525477    4281 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:05:7d:ba:6c:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/disk.qcow2
	I1025 14:32:29.527272    4281 main.go:141] libmachine: STDOUT: 
	I1025 14:32:29.527284    4281 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:32:29.527304    4281 client.go:171] LocalClient.Create took 203.566875ms
	I1025 14:32:31.529501    4281 start.go:128] duration metric: createHost completed in 2.23155075s
	I1025 14:32:31.529594    4281 start.go:83] releasing machines lock for "flannel-475000", held for 2.231709333s
	W1025 14:32:31.529649    4281 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:32:31.536888    4281 out.go:177] * Deleting "flannel-475000" in qemu2 ...
	W1025 14:32:31.564494    4281 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:32:31.564533    4281 start.go:706] Will try again in 5 seconds ...
	I1025 14:32:36.566724    4281 start.go:365] acquiring machines lock for flannel-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:32:36.566985    4281 start.go:369] acquired machines lock for "flannel-475000" in 187.458µs
	I1025 14:32:36.567074    4281 start.go:93] Provisioning new machine with config: &{Name:flannel-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.3 ClusterName:flannel-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:32:36.567246    4281 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:32:36.574686    4281 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:32:36.615819    4281 start.go:159] libmachine.API.Create for "flannel-475000" (driver="qemu2")
	I1025 14:32:36.615871    4281 client.go:168] LocalClient.Create starting
	I1025 14:32:36.615978    4281 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:32:36.616028    4281 main.go:141] libmachine: Decoding PEM data...
	I1025 14:32:36.616043    4281 main.go:141] libmachine: Parsing certificate...
	I1025 14:32:36.616111    4281 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:32:36.616149    4281 main.go:141] libmachine: Decoding PEM data...
	I1025 14:32:36.616163    4281 main.go:141] libmachine: Parsing certificate...
	I1025 14:32:36.616691    4281 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:32:36.753333    4281 main.go:141] libmachine: Creating SSH key...
	I1025 14:32:36.864057    4281 main.go:141] libmachine: Creating Disk image...
	I1025 14:32:36.864064    4281 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:32:36.864227    4281 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/disk.qcow2
	I1025 14:32:36.876834    4281 main.go:141] libmachine: STDOUT: 
	I1025 14:32:36.876848    4281 main.go:141] libmachine: STDERR: 
	I1025 14:32:36.876910    4281 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/disk.qcow2 +20000M
	I1025 14:32:36.887990    4281 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:32:36.888003    4281 main.go:141] libmachine: STDERR: 
	I1025 14:32:36.888017    4281 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/disk.qcow2
	I1025 14:32:36.888024    4281 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:32:36.888053    4281 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:b0:0c:d7:60:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/flannel-475000/disk.qcow2
	I1025 14:32:36.889815    4281 main.go:141] libmachine: STDOUT: 
	I1025 14:32:36.889829    4281 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:32:36.889839    4281 client.go:171] LocalClient.Create took 273.965042ms
	I1025 14:32:38.892007    4281 start.go:128] duration metric: createHost completed in 2.324759708s
	I1025 14:32:38.892073    4281 start.go:83] releasing machines lock for "flannel-475000", held for 2.325102667s
	W1025 14:32:38.892371    4281 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:32:38.901850    4281 out.go:177] 
	W1025 14:32:38.906481    4281 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:32:38.906508    4281 out.go:239] * 
	* 
	W1025 14:32:38.907658    4281 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:32:38.918330    4281 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (9.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.843635792s)

                                                
                                                
-- stdout --
	* [kindnet-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-475000 in cluster kindnet-475000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-475000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:32:41.333940    4406 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:32:41.334082    4406 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:32:41.334086    4406 out.go:309] Setting ErrFile to fd 2...
	I1025 14:32:41.334089    4406 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:32:41.334212    4406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:32:41.335252    4406 out.go:303] Setting JSON to false
	I1025 14:32:41.351446    4406 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1935,"bootTime":1698267626,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:32:41.351547    4406 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:32:41.356984    4406 out.go:177] * [kindnet-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:32:41.364845    4406 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:32:41.364911    4406 notify.go:220] Checking for updates...
	I1025 14:32:41.371718    4406 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:32:41.374793    4406 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:32:41.377838    4406 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:32:41.380799    4406 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:32:41.383809    4406 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:32:41.387097    4406 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:32:41.387143    4406 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:32:41.391763    4406 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:32:41.398792    4406 start.go:298] selected driver: qemu2
	I1025 14:32:41.398799    4406 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:32:41.398805    4406 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:32:41.401109    4406 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:32:41.403694    4406 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:32:41.406865    4406 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:32:41.406893    4406 cni.go:84] Creating CNI manager for "kindnet"
	I1025 14:32:41.406897    4406 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 14:32:41.406903    4406 start_flags.go:323] config:
	{Name:kindnet-475000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:kindnet-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:32:41.411472    4406 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:32:41.422769    4406 out.go:177] * Starting control plane node kindnet-475000 in cluster kindnet-475000
	I1025 14:32:41.426728    4406 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:32:41.426742    4406 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:32:41.426749    4406 cache.go:56] Caching tarball of preloaded images
	I1025 14:32:41.426811    4406 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:32:41.426816    4406 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:32:41.426871    4406 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/kindnet-475000/config.json ...
	I1025 14:32:41.426883    4406 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/kindnet-475000/config.json: {Name:mkd79b9a6f7e810c2ddb5ce306e8d7f052c20f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:32:41.427086    4406 start.go:365] acquiring machines lock for kindnet-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:32:41.427117    4406 start.go:369] acquired machines lock for "kindnet-475000" in 25.041µs
	I1025 14:32:41.427127    4406 start.go:93] Provisioning new machine with config: &{Name:kindnet-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.3 ClusterName:kindnet-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:32:41.427162    4406 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:32:41.435821    4406 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:32:41.453359    4406 start.go:159] libmachine.API.Create for "kindnet-475000" (driver="qemu2")
	I1025 14:32:41.453379    4406 client.go:168] LocalClient.Create starting
	I1025 14:32:41.453434    4406 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:32:41.453463    4406 main.go:141] libmachine: Decoding PEM data...
	I1025 14:32:41.453473    4406 main.go:141] libmachine: Parsing certificate...
	I1025 14:32:41.453510    4406 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:32:41.453528    4406 main.go:141] libmachine: Decoding PEM data...
	I1025 14:32:41.453536    4406 main.go:141] libmachine: Parsing certificate...
	I1025 14:32:41.453872    4406 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:32:41.576092    4406 main.go:141] libmachine: Creating SSH key...
	I1025 14:32:41.689764    4406 main.go:141] libmachine: Creating Disk image...
	I1025 14:32:41.689769    4406 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:32:41.689938    4406 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/disk.qcow2
	I1025 14:32:41.702176    4406 main.go:141] libmachine: STDOUT: 
	I1025 14:32:41.702192    4406 main.go:141] libmachine: STDERR: 
	I1025 14:32:41.702252    4406 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/disk.qcow2 +20000M
	I1025 14:32:41.712632    4406 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:32:41.712662    4406 main.go:141] libmachine: STDERR: 
	I1025 14:32:41.712679    4406 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/disk.qcow2
	I1025 14:32:41.712687    4406 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:32:41.712720    4406 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:d7:7c:22:5c:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/disk.qcow2
	I1025 14:32:41.714393    4406 main.go:141] libmachine: STDOUT: 
	I1025 14:32:41.714405    4406 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:32:41.714423    4406 client.go:171] LocalClient.Create took 261.043375ms
	I1025 14:32:43.716617    4406 start.go:128] duration metric: createHost completed in 2.289458667s
	I1025 14:32:43.716678    4406 start.go:83] releasing machines lock for "kindnet-475000", held for 2.289579458s
	W1025 14:32:43.716747    4406 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:32:43.726640    4406 out.go:177] * Deleting "kindnet-475000" in qemu2 ...
	W1025 14:32:43.752204    4406 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:32:43.752234    4406 start.go:706] Will try again in 5 seconds ...
	I1025 14:32:48.754341    4406 start.go:365] acquiring machines lock for kindnet-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:32:48.754687    4406 start.go:369] acquired machines lock for "kindnet-475000" in 264.042µs
	I1025 14:32:48.754779    4406 start.go:93] Provisioning new machine with config: &{Name:kindnet-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.3 ClusterName:kindnet-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:32:48.755042    4406 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:32:48.764625    4406 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:32:48.813158    4406 start.go:159] libmachine.API.Create for "kindnet-475000" (driver="qemu2")
	I1025 14:32:48.813212    4406 client.go:168] LocalClient.Create starting
	I1025 14:32:48.813331    4406 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:32:48.813393    4406 main.go:141] libmachine: Decoding PEM data...
	I1025 14:32:48.813415    4406 main.go:141] libmachine: Parsing certificate...
	I1025 14:32:48.813490    4406 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:32:48.813538    4406 main.go:141] libmachine: Decoding PEM data...
	I1025 14:32:48.813561    4406 main.go:141] libmachine: Parsing certificate...
	I1025 14:32:48.814159    4406 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:32:48.948497    4406 main.go:141] libmachine: Creating SSH key...
	I1025 14:32:49.076646    4406 main.go:141] libmachine: Creating Disk image...
	I1025 14:32:49.076653    4406 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:32:49.076791    4406 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/disk.qcow2
	I1025 14:32:49.088946    4406 main.go:141] libmachine: STDOUT: 
	I1025 14:32:49.088979    4406 main.go:141] libmachine: STDERR: 
	I1025 14:32:49.089063    4406 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/disk.qcow2 +20000M
	I1025 14:32:49.099629    4406 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:32:49.099644    4406 main.go:141] libmachine: STDERR: 
	I1025 14:32:49.099659    4406 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/disk.qcow2
	I1025 14:32:49.099668    4406 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:32:49.099711    4406 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:86:e6:82:27:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kindnet-475000/disk.qcow2
	I1025 14:32:49.101436    4406 main.go:141] libmachine: STDOUT: 
	I1025 14:32:49.101449    4406 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:32:49.101467    4406 client.go:171] LocalClient.Create took 288.253625ms
	I1025 14:32:51.103675    4406 start.go:128] duration metric: createHost completed in 2.348619625s
	I1025 14:32:51.103915    4406 start.go:83] releasing machines lock for "kindnet-475000", held for 2.349098875s
	W1025 14:32:51.104428    4406 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:32:51.116120    4406 out.go:177] 
	W1025 14:32:51.119110    4406 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:32:51.119165    4406 out.go:239] * 
	* 
	W1025 14:32:51.121694    4406 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:32:51.132059    4406 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (9.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.717834667s)

                                                
                                                
-- stdout --
	* [enable-default-cni-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-475000 in cluster enable-default-cni-475000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-475000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:32:53.531669    4524 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:32:53.531830    4524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:32:53.531833    4524 out.go:309] Setting ErrFile to fd 2...
	I1025 14:32:53.531836    4524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:32:53.531970    4524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:32:53.532981    4524 out.go:303] Setting JSON to false
	I1025 14:32:53.549069    4524 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1947,"bootTime":1698267626,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:32:53.549137    4524 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:32:53.555439    4524 out.go:177] * [enable-default-cni-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:32:53.563464    4524 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:32:53.567456    4524 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:32:53.563517    4524 notify.go:220] Checking for updates...
	I1025 14:32:53.571391    4524 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:32:53.574500    4524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:32:53.577452    4524 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:32:53.580443    4524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:32:53.583838    4524 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:32:53.583884    4524 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:32:53.588494    4524 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:32:53.595428    4524 start.go:298] selected driver: qemu2
	I1025 14:32:53.595438    4524 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:32:53.595445    4524 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:32:53.597920    4524 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:32:53.601437    4524 out.go:177] * Automatically selected the socket_vmnet network
	E1025 14:32:53.604600    4524 start_flags.go:457] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1025 14:32:53.604614    4524 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:32:53.604644    4524 cni.go:84] Creating CNI manager for "bridge"
	I1025 14:32:53.604650    4524 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 14:32:53.604657    4524 start_flags.go:323] config:
	{Name:enable-default-cni-475000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:enable-default-cni-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Sta
ticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:32:53.609408    4524 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:32:53.615331    4524 out.go:177] * Starting control plane node enable-default-cni-475000 in cluster enable-default-cni-475000
	I1025 14:32:53.619480    4524 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:32:53.619498    4524 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:32:53.619511    4524 cache.go:56] Caching tarball of preloaded images
	I1025 14:32:53.619576    4524 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:32:53.619583    4524 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:32:53.619654    4524 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/enable-default-cni-475000/config.json ...
	I1025 14:32:53.619672    4524 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/enable-default-cni-475000/config.json: {Name:mkc5ee9409cc385cbad29c485e05ed58d8e6d6a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:32:53.619879    4524 start.go:365] acquiring machines lock for enable-default-cni-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:32:53.619912    4524 start.go:369] acquired machines lock for "enable-default-cni-475000" in 25.25µs
	I1025 14:32:53.619923    4524 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:enable-default-cni-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:32:53.619961    4524 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:32:53.623466    4524 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:32:53.640362    4524 start.go:159] libmachine.API.Create for "enable-default-cni-475000" (driver="qemu2")
	I1025 14:32:53.640383    4524 client.go:168] LocalClient.Create starting
	I1025 14:32:53.640431    4524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:32:53.640459    4524 main.go:141] libmachine: Decoding PEM data...
	I1025 14:32:53.640467    4524 main.go:141] libmachine: Parsing certificate...
	I1025 14:32:53.640504    4524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:32:53.640522    4524 main.go:141] libmachine: Decoding PEM data...
	I1025 14:32:53.640529    4524 main.go:141] libmachine: Parsing certificate...
	I1025 14:32:53.640856    4524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:32:53.762709    4524 main.go:141] libmachine: Creating SSH key...
	I1025 14:32:53.832049    4524 main.go:141] libmachine: Creating Disk image...
	I1025 14:32:53.832054    4524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:32:53.832225    4524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/disk.qcow2
	I1025 14:32:53.844411    4524 main.go:141] libmachine: STDOUT: 
	I1025 14:32:53.844426    4524 main.go:141] libmachine: STDERR: 
	I1025 14:32:53.844492    4524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/disk.qcow2 +20000M
	I1025 14:32:53.854891    4524 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:32:53.854906    4524 main.go:141] libmachine: STDERR: 
	I1025 14:32:53.854923    4524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/disk.qcow2
	I1025 14:32:53.854931    4524 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:32:53.854964    4524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:e5:94:7f:9c:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/disk.qcow2
	I1025 14:32:53.856715    4524 main.go:141] libmachine: STDOUT: 
	I1025 14:32:53.856726    4524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:32:53.856744    4524 client.go:171] LocalClient.Create took 216.359167ms
	I1025 14:32:55.858903    4524 start.go:128] duration metric: createHost completed in 2.238946542s
	I1025 14:32:55.858985    4524 start.go:83] releasing machines lock for "enable-default-cni-475000", held for 2.239089583s
	W1025 14:32:55.859030    4524 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:32:55.874175    4524 out.go:177] * Deleting "enable-default-cni-475000" in qemu2 ...
	W1025 14:32:55.898035    4524 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:32:55.898069    4524 start.go:706] Will try again in 5 seconds ...
	I1025 14:33:00.900327    4524 start.go:365] acquiring machines lock for enable-default-cni-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:33:00.900757    4524 start.go:369] acquired machines lock for "enable-default-cni-475000" in 328.083µs
	I1025 14:33:00.900890    4524 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:enable-default-cni-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:33:00.901120    4524 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:33:00.910667    4524 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:33:00.960016    4524 start.go:159] libmachine.API.Create for "enable-default-cni-475000" (driver="qemu2")
	I1025 14:33:00.960070    4524 client.go:168] LocalClient.Create starting
	I1025 14:33:00.960193    4524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:33:00.960259    4524 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:00.960276    4524 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:00.960334    4524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:33:00.960370    4524 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:00.960386    4524 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:00.960897    4524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:33:01.096650    4524 main.go:141] libmachine: Creating SSH key...
	I1025 14:33:01.143451    4524 main.go:141] libmachine: Creating Disk image...
	I1025 14:33:01.143457    4524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:33:01.143610    4524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/disk.qcow2
	I1025 14:33:01.155919    4524 main.go:141] libmachine: STDOUT: 
	I1025 14:33:01.155936    4524 main.go:141] libmachine: STDERR: 
	I1025 14:33:01.156007    4524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/disk.qcow2 +20000M
	I1025 14:33:01.166358    4524 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:33:01.166375    4524 main.go:141] libmachine: STDERR: 
	I1025 14:33:01.166390    4524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/disk.qcow2
	I1025 14:33:01.166398    4524 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:33:01.166428    4524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:f9:a8:5c:6d:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/enable-default-cni-475000/disk.qcow2
	I1025 14:33:01.168178    4524 main.go:141] libmachine: STDOUT: 
	I1025 14:33:01.168199    4524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:33:01.168211    4524 client.go:171] LocalClient.Create took 208.137416ms
	I1025 14:33:03.170369    4524 start.go:128] duration metric: createHost completed in 2.269251416s
	I1025 14:33:03.170420    4524 start.go:83] releasing machines lock for "enable-default-cni-475000", held for 2.269666375s
	W1025 14:33:03.170789    4524 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:03.183480    4524 out.go:177] 
	W1025 14:33:03.187517    4524 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:33:03.187541    4524 out.go:239] * 
	* 
	W1025 14:33:03.190264    4524 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:33:03.200407    4524 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (9.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.802336375s)

                                                
                                                
-- stdout --
	* [bridge-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-475000 in cluster bridge-475000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-475000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:33:05.523536    4634 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:33:05.523676    4634 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:33:05.523679    4634 out.go:309] Setting ErrFile to fd 2...
	I1025 14:33:05.523682    4634 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:33:05.523817    4634 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:33:05.524829    4634 out.go:303] Setting JSON to false
	I1025 14:33:05.541132    4634 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1959,"bootTime":1698267626,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:33:05.541199    4634 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:33:05.547105    4634 out.go:177] * [bridge-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:33:05.554770    4634 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:33:05.558959    4634 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:33:05.554825    4634 notify.go:220] Checking for updates...
	I1025 14:33:05.563115    4634 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:33:05.565911    4634 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:33:05.568910    4634 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:33:05.571945    4634 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:33:05.575343    4634 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:33:05.575385    4634 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:33:05.579909    4634 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:33:05.586929    4634 start.go:298] selected driver: qemu2
	I1025 14:33:05.586937    4634 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:33:05.586943    4634 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:33:05.589377    4634 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:33:05.591948    4634 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:33:05.595019    4634 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:33:05.595037    4634 cni.go:84] Creating CNI manager for "bridge"
	I1025 14:33:05.595041    4634 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 14:33:05.595047    4634 start_flags.go:323] config:
	{Name:bridge-475000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:bridge-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:33:05.599550    4634 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:33:05.606920    4634 out.go:177] * Starting control plane node bridge-475000 in cluster bridge-475000
	I1025 14:33:05.609903    4634 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:33:05.609921    4634 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:33:05.609928    4634 cache.go:56] Caching tarball of preloaded images
	I1025 14:33:05.609976    4634 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:33:05.609981    4634 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:33:05.610036    4634 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/bridge-475000/config.json ...
	I1025 14:33:05.610047    4634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/bridge-475000/config.json: {Name:mk799b91ee70db1559ffa655c2a596fe021c7f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:33:05.610240    4634 start.go:365] acquiring machines lock for bridge-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:33:05.610269    4634 start.go:369] acquired machines lock for "bridge-475000" in 22.958µs
	I1025 14:33:05.610279    4634 start.go:93] Provisioning new machine with config: &{Name:bridge-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.3 ClusterName:bridge-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:33:05.610313    4634 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:33:05.614968    4634 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:33:05.630987    4634 start.go:159] libmachine.API.Create for "bridge-475000" (driver="qemu2")
	I1025 14:33:05.631016    4634 client.go:168] LocalClient.Create starting
	I1025 14:33:05.631067    4634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:33:05.631092    4634 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:05.631100    4634 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:05.631133    4634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:33:05.631149    4634 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:05.631157    4634 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:05.631454    4634 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:33:05.753147    4634 main.go:141] libmachine: Creating SSH key...
	I1025 14:33:05.868347    4634 main.go:141] libmachine: Creating Disk image...
	I1025 14:33:05.868352    4634 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:33:05.868508    4634 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/disk.qcow2
	I1025 14:33:05.880861    4634 main.go:141] libmachine: STDOUT: 
	I1025 14:33:05.880874    4634 main.go:141] libmachine: STDERR: 
	I1025 14:33:05.880925    4634 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/disk.qcow2 +20000M
	I1025 14:33:05.891477    4634 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:33:05.891496    4634 main.go:141] libmachine: STDERR: 
	I1025 14:33:05.891519    4634 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/disk.qcow2
	I1025 14:33:05.891525    4634 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:33:05.891551    4634 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:4a:0b:11:83:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/disk.qcow2
	I1025 14:33:05.893379    4634 main.go:141] libmachine: STDOUT: 
	I1025 14:33:05.893391    4634 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:33:05.893410    4634 client.go:171] LocalClient.Create took 262.391667ms
	I1025 14:33:07.895565    4634 start.go:128] duration metric: createHost completed in 2.285256833s
	I1025 14:33:07.895635    4634 start.go:83] releasing machines lock for "bridge-475000", held for 2.28538475s
	W1025 14:33:07.895687    4634 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:07.909612    4634 out.go:177] * Deleting "bridge-475000" in qemu2 ...
	W1025 14:33:07.932152    4634 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:07.932227    4634 start.go:706] Will try again in 5 seconds ...
	I1025 14:33:12.934446    4634 start.go:365] acquiring machines lock for bridge-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:33:12.934876    4634 start.go:369] acquired machines lock for "bridge-475000" in 333.042µs
	I1025 14:33:12.934995    4634 start.go:93] Provisioning new machine with config: &{Name:bridge-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.3 ClusterName:bridge-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:33:12.935323    4634 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:33:12.946017    4634 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:33:12.995917    4634 start.go:159] libmachine.API.Create for "bridge-475000" (driver="qemu2")
	I1025 14:33:12.995973    4634 client.go:168] LocalClient.Create starting
	I1025 14:33:12.996095    4634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:33:12.996143    4634 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:12.996161    4634 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:12.996221    4634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:33:12.996257    4634 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:12.996269    4634 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:12.996789    4634 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:33:13.129874    4634 main.go:141] libmachine: Creating SSH key...
	I1025 14:33:13.227085    4634 main.go:141] libmachine: Creating Disk image...
	I1025 14:33:13.227096    4634 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:33:13.227260    4634 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/disk.qcow2
	I1025 14:33:13.239674    4634 main.go:141] libmachine: STDOUT: 
	I1025 14:33:13.239695    4634 main.go:141] libmachine: STDERR: 
	I1025 14:33:13.239759    4634 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/disk.qcow2 +20000M
	I1025 14:33:13.250461    4634 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:33:13.250475    4634 main.go:141] libmachine: STDERR: 
	I1025 14:33:13.250495    4634 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/disk.qcow2
	I1025 14:33:13.250504    4634 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:33:13.250543    4634 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:49:f4:62:10:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/bridge-475000/disk.qcow2
	I1025 14:33:13.252311    4634 main.go:141] libmachine: STDOUT: 
	I1025 14:33:13.252325    4634 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:33:13.252339    4634 client.go:171] LocalClient.Create took 256.362875ms
	I1025 14:33:15.254491    4634 start.go:128] duration metric: createHost completed in 2.319167709s
	I1025 14:33:15.254555    4634 start.go:83] releasing machines lock for "bridge-475000", held for 2.319683959s
	W1025 14:33:15.254922    4634 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:15.263496    4634 out.go:177] 
	W1025 14:33:15.268592    4634 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:33:15.268616    4634 out.go:239] * 
	* 
	W1025 14:33:15.271239    4634 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:33:15.280512    4634 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (9.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.858582292s)

                                                
                                                
-- stdout --
	* [kubenet-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-475000 in cluster kubenet-475000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-475000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:33:17.588560    4744 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:33:17.588705    4744 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:33:17.588708    4744 out.go:309] Setting ErrFile to fd 2...
	I1025 14:33:17.588710    4744 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:33:17.588862    4744 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:33:17.589864    4744 out.go:303] Setting JSON to false
	I1025 14:33:17.605863    4744 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1971,"bootTime":1698267626,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:33:17.605948    4744 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:33:17.611322    4744 out.go:177] * [kubenet-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:33:17.623298    4744 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:33:17.619327    4744 notify.go:220] Checking for updates...
	I1025 14:33:17.629334    4744 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:33:17.632347    4744 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:33:17.635343    4744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:33:17.638375    4744 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:33:17.641361    4744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:33:17.644757    4744 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:33:17.644806    4744 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:33:17.649210    4744 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:33:17.656306    4744 start.go:298] selected driver: qemu2
	I1025 14:33:17.656316    4744 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:33:17.656323    4744 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:33:17.658655    4744 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:33:17.662207    4744 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:33:17.665434    4744 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:33:17.665451    4744 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1025 14:33:17.665456    4744 start_flags.go:323] config:
	{Name:kubenet-475000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:kubenet-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:33:17.669991    4744 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:33:17.677333    4744 out.go:177] * Starting control plane node kubenet-475000 in cluster kubenet-475000
	I1025 14:33:17.681310    4744 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:33:17.681326    4744 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:33:17.681338    4744 cache.go:56] Caching tarball of preloaded images
	I1025 14:33:17.681395    4744 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:33:17.681401    4744 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:33:17.681480    4744 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/kubenet-475000/config.json ...
	I1025 14:33:17.681498    4744 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/kubenet-475000/config.json: {Name:mk227ffb7d8074a1763b31b884f37181c55353c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:33:17.681700    4744 start.go:365] acquiring machines lock for kubenet-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:33:17.681731    4744 start.go:369] acquired machines lock for "kubenet-475000" in 25.125µs
	I1025 14:33:17.681741    4744 start.go:93] Provisioning new machine with config: &{Name:kubenet-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.3 ClusterName:kubenet-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:33:17.681768    4744 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:33:17.690275    4744 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:33:17.707756    4744 start.go:159] libmachine.API.Create for "kubenet-475000" (driver="qemu2")
	I1025 14:33:17.707782    4744 client.go:168] LocalClient.Create starting
	I1025 14:33:17.707835    4744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:33:17.707867    4744 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:17.707875    4744 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:17.707912    4744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:33:17.707931    4744 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:17.707938    4744 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:17.708261    4744 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:33:17.830692    4744 main.go:141] libmachine: Creating SSH key...
	I1025 14:33:17.881142    4744 main.go:141] libmachine: Creating Disk image...
	I1025 14:33:17.881150    4744 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:33:17.881309    4744 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/disk.qcow2
	I1025 14:33:17.893534    4744 main.go:141] libmachine: STDOUT: 
	I1025 14:33:17.893552    4744 main.go:141] libmachine: STDERR: 
	I1025 14:33:17.893613    4744 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/disk.qcow2 +20000M
	I1025 14:33:17.904226    4744 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:33:17.904239    4744 main.go:141] libmachine: STDERR: 
	I1025 14:33:17.904257    4744 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/disk.qcow2
	I1025 14:33:17.904263    4744 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:33:17.904294    4744 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:60:23:91:d7:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/disk.qcow2
	I1025 14:33:17.906074    4744 main.go:141] libmachine: STDOUT: 
	I1025 14:33:17.906086    4744 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:33:17.906105    4744 client.go:171] LocalClient.Create took 198.320417ms
	I1025 14:33:19.908281    4744 start.go:128] duration metric: createHost completed in 2.226513542s
	I1025 14:33:19.908353    4744 start.go:83] releasing machines lock for "kubenet-475000", held for 2.226640666s
	W1025 14:33:19.908411    4744 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:19.918592    4744 out.go:177] * Deleting "kubenet-475000" in qemu2 ...
	W1025 14:33:19.942903    4744 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:19.942932    4744 start.go:706] Will try again in 5 seconds ...
	I1025 14:33:24.945196    4744 start.go:365] acquiring machines lock for kubenet-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:33:24.945595    4744 start.go:369] acquired machines lock for "kubenet-475000" in 304.334µs
	I1025 14:33:24.945824    4744 start.go:93] Provisioning new machine with config: &{Name:kubenet-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.3 ClusterName:kubenet-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:33:24.946084    4744 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:33:24.953642    4744 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:33:24.999788    4744 start.go:159] libmachine.API.Create for "kubenet-475000" (driver="qemu2")
	I1025 14:33:24.999841    4744 client.go:168] LocalClient.Create starting
	I1025 14:33:24.999955    4744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:33:25.000011    4744 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:25.000026    4744 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:25.000084    4744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:33:25.000117    4744 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:25.000131    4744 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:25.000647    4744 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:33:25.129849    4744 main.go:141] libmachine: Creating SSH key...
	I1025 14:33:25.343376    4744 main.go:141] libmachine: Creating Disk image...
	I1025 14:33:25.343384    4744 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:33:25.343544    4744 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/disk.qcow2
	I1025 14:33:25.356283    4744 main.go:141] libmachine: STDOUT: 
	I1025 14:33:25.356301    4744 main.go:141] libmachine: STDERR: 
	I1025 14:33:25.356373    4744 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/disk.qcow2 +20000M
	I1025 14:33:25.366896    4744 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:33:25.366909    4744 main.go:141] libmachine: STDERR: 
	I1025 14:33:25.366928    4744 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/disk.qcow2
	I1025 14:33:25.366939    4744 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:33:25.366984    4744 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:cc:6d:f6:bd:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/kubenet-475000/disk.qcow2
	I1025 14:33:25.368691    4744 main.go:141] libmachine: STDOUT: 
	I1025 14:33:25.368703    4744 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:33:25.368714    4744 client.go:171] LocalClient.Create took 368.872833ms
	I1025 14:33:27.370872    4744 start.go:128] duration metric: createHost completed in 2.424766208s
	I1025 14:33:27.370936    4744 start.go:83] releasing machines lock for "kubenet-475000", held for 2.425348291s
	W1025 14:33:27.371364    4744 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:27.384053    4744 out.go:177] 
	W1025 14:33:27.388185    4744 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:33:27.388225    4744 out.go:239] * 
	* 
	W1025 14:33:27.390876    4744 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:33:27.402024    4744 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (9.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.768540333s)

                                                
                                                
-- stdout --
	* [custom-flannel-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-475000 in cluster custom-flannel-475000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-475000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:33:29.708868    4867 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:33:29.709021    4867 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:33:29.709024    4867 out.go:309] Setting ErrFile to fd 2...
	I1025 14:33:29.709026    4867 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:33:29.709151    4867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:33:29.710227    4867 out.go:303] Setting JSON to false
	I1025 14:33:29.726466    4867 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1983,"bootTime":1698267626,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:33:29.726558    4867 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:33:29.731870    4867 out.go:177] * [custom-flannel-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:33:29.738889    4867 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:33:29.738961    4867 notify.go:220] Checking for updates...
	I1025 14:33:29.742849    4867 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:33:29.745876    4867 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:33:29.748915    4867 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:33:29.752018    4867 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:33:29.754879    4867 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:33:29.758272    4867 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:33:29.758319    4867 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:33:29.762769    4867 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:33:29.769818    4867 start.go:298] selected driver: qemu2
	I1025 14:33:29.769824    4867 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:33:29.769829    4867 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:33:29.772124    4867 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:33:29.774795    4867 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:33:29.777964    4867 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:33:29.778012    4867 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1025 14:33:29.778020    4867 start_flags.go:318] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1025 14:33:29.778025    4867 start_flags.go:323] config:
	{Name:custom-flannel-475000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:custom-flannel-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/sock
et_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:33:29.782623    4867 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:33:29.789817    4867 out.go:177] * Starting control plane node custom-flannel-475000 in cluster custom-flannel-475000
	I1025 14:33:29.793836    4867 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:33:29.793853    4867 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:33:29.793864    4867 cache.go:56] Caching tarball of preloaded images
	I1025 14:33:29.793915    4867 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:33:29.793920    4867 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:33:29.793987    4867 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/custom-flannel-475000/config.json ...
	I1025 14:33:29.793999    4867 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/custom-flannel-475000/config.json: {Name:mk85c756c5e6b3ffd1313d496bb2bb89d6bb5e37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:33:29.794206    4867 start.go:365] acquiring machines lock for custom-flannel-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:33:29.794235    4867 start.go:369] acquired machines lock for "custom-flannel-475000" in 23.375µs
	I1025 14:33:29.794245    4867 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.3 ClusterName:custom-flannel-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:33:29.794277    4867 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:33:29.802902    4867 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:33:29.818712    4867 start.go:159] libmachine.API.Create for "custom-flannel-475000" (driver="qemu2")
	I1025 14:33:29.818743    4867 client.go:168] LocalClient.Create starting
	I1025 14:33:29.818804    4867 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:33:29.818828    4867 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:29.818838    4867 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:29.818875    4867 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:33:29.818893    4867 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:29.818899    4867 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:29.819188    4867 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:33:29.940926    4867 main.go:141] libmachine: Creating SSH key...
	I1025 14:33:30.033232    4867 main.go:141] libmachine: Creating Disk image...
	I1025 14:33:30.033237    4867 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:33:30.033384    4867 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/disk.qcow2
	I1025 14:33:30.045253    4867 main.go:141] libmachine: STDOUT: 
	I1025 14:33:30.045271    4867 main.go:141] libmachine: STDERR: 
	I1025 14:33:30.045322    4867 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/disk.qcow2 +20000M
	I1025 14:33:30.055701    4867 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:33:30.055715    4867 main.go:141] libmachine: STDERR: 
	I1025 14:33:30.055731    4867 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/disk.qcow2
	I1025 14:33:30.055738    4867 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:33:30.055773    4867 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:7b:c3:c8:5e:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/disk.qcow2
	I1025 14:33:30.057395    4867 main.go:141] libmachine: STDOUT: 
	I1025 14:33:30.057409    4867 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:33:30.057426    4867 client.go:171] LocalClient.Create took 238.679917ms
	I1025 14:33:32.059614    4867 start.go:128] duration metric: createHost completed in 2.265337458s
	I1025 14:33:32.059715    4867 start.go:83] releasing machines lock for "custom-flannel-475000", held for 2.265494416s
	W1025 14:33:32.059759    4867 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:32.073919    4867 out.go:177] * Deleting "custom-flannel-475000" in qemu2 ...
	W1025 14:33:32.096607    4867 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:32.096642    4867 start.go:706] Will try again in 5 seconds ...
	I1025 14:33:37.098877    4867 start.go:365] acquiring machines lock for custom-flannel-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:33:37.099288    4867 start.go:369] acquired machines lock for "custom-flannel-475000" in 303.291µs
	I1025 14:33:37.099410    4867 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.3 ClusterName:custom-flannel-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:33:37.099797    4867 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:33:37.109367    4867 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:33:37.157349    4867 start.go:159] libmachine.API.Create for "custom-flannel-475000" (driver="qemu2")
	I1025 14:33:37.157397    4867 client.go:168] LocalClient.Create starting
	I1025 14:33:37.157495    4867 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:33:37.157551    4867 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:37.157567    4867 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:37.157626    4867 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:33:37.157661    4867 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:37.157679    4867 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:37.158163    4867 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:33:37.292717    4867 main.go:141] libmachine: Creating SSH key...
	I1025 14:33:37.375813    4867 main.go:141] libmachine: Creating Disk image...
	I1025 14:33:37.375819    4867 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:33:37.375990    4867 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/disk.qcow2
	I1025 14:33:37.388269    4867 main.go:141] libmachine: STDOUT: 
	I1025 14:33:37.388299    4867 main.go:141] libmachine: STDERR: 
	I1025 14:33:37.388356    4867 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/disk.qcow2 +20000M
	I1025 14:33:37.399072    4867 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:33:37.399091    4867 main.go:141] libmachine: STDERR: 
	I1025 14:33:37.399109    4867 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/disk.qcow2
	I1025 14:33:37.399116    4867 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:33:37.399414    4867 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:ae:f3:53:8a:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/custom-flannel-475000/disk.qcow2
	I1025 14:33:37.401584    4867 main.go:141] libmachine: STDOUT: 
	I1025 14:33:37.401600    4867 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:33:37.401613    4867 client.go:171] LocalClient.Create took 244.210125ms
	I1025 14:33:39.403858    4867 start.go:128] duration metric: createHost completed in 2.304016584s
	I1025 14:33:39.403938    4867 start.go:83] releasing machines lock for "custom-flannel-475000", held for 2.304651292s
	W1025 14:33:39.404324    4867 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:39.413859    4867 out.go:177] 
	W1025 14:33:39.419971    4867 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:33:39.420012    4867 out.go:239] * 
	* 
	W1025 14:33:39.422644    4867 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:33:39.431879    4867 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (9.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.954735375s)

                                                
                                                
-- stdout --
	* [calico-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-475000 in cluster calico-475000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-475000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:33:41.907953    4994 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:33:41.908070    4994 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:33:41.908072    4994 out.go:309] Setting ErrFile to fd 2...
	I1025 14:33:41.908077    4994 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:33:41.908224    4994 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:33:41.909298    4994 out.go:303] Setting JSON to false
	I1025 14:33:41.925328    4994 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1995,"bootTime":1698267626,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:33:41.925405    4994 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:33:41.929960    4994 out.go:177] * [calico-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:33:41.937810    4994 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:33:41.941813    4994 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:33:41.937887    4994 notify.go:220] Checking for updates...
	I1025 14:33:41.947778    4994 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:33:41.950831    4994 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:33:41.953845    4994 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:33:41.956763    4994 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:33:41.960156    4994 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:33:41.960205    4994 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:33:41.964802    4994 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:33:41.971805    4994 start.go:298] selected driver: qemu2
	I1025 14:33:41.971812    4994 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:33:41.971818    4994 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:33:41.974126    4994 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:33:41.976818    4994 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:33:41.979877    4994 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:33:41.979894    4994 cni.go:84] Creating CNI manager for "calico"
	I1025 14:33:41.979897    4994 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I1025 14:33:41.979904    4994 start_flags.go:323] config:
	{Name:calico-475000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:calico-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:33:41.984449    4994 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:33:41.991712    4994 out.go:177] * Starting control plane node calico-475000 in cluster calico-475000
	I1025 14:33:41.995762    4994 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:33:41.995777    4994 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:33:41.995785    4994 cache.go:56] Caching tarball of preloaded images
	I1025 14:33:41.995832    4994 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:33:41.995838    4994 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:33:41.995891    4994 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/calico-475000/config.json ...
	I1025 14:33:41.995901    4994 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/calico-475000/config.json: {Name:mkfac48c445a61b72078da24d8253ee7045c3d57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:33:41.996096    4994 start.go:365] acquiring machines lock for calico-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:33:41.996125    4994 start.go:369] acquired machines lock for "calico-475000" in 23.292µs
	I1025 14:33:41.996135    4994 start.go:93] Provisioning new machine with config: &{Name:calico-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.3 ClusterName:calico-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:33:41.996173    4994 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:33:42.004746    4994 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:33:42.021146    4994 start.go:159] libmachine.API.Create for "calico-475000" (driver="qemu2")
	I1025 14:33:42.021173    4994 client.go:168] LocalClient.Create starting
	I1025 14:33:42.021233    4994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:33:42.021265    4994 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:42.021276    4994 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:42.021312    4994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:33:42.021331    4994 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:42.021339    4994 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:42.021722    4994 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:33:42.143543    4994 main.go:141] libmachine: Creating SSH key...
	I1025 14:33:42.425996    4994 main.go:141] libmachine: Creating Disk image...
	I1025 14:33:42.426006    4994 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:33:42.426179    4994 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/disk.qcow2
	I1025 14:33:42.438756    4994 main.go:141] libmachine: STDOUT: 
	I1025 14:33:42.438782    4994 main.go:141] libmachine: STDERR: 
	I1025 14:33:42.438860    4994 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/disk.qcow2 +20000M
	I1025 14:33:42.449534    4994 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:33:42.449562    4994 main.go:141] libmachine: STDERR: 
	I1025 14:33:42.449581    4994 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/disk.qcow2
	I1025 14:33:42.449590    4994 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:33:42.449627    4994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:1a:71:3b:9f:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/disk.qcow2
	I1025 14:33:42.451433    4994 main.go:141] libmachine: STDOUT: 
	I1025 14:33:42.451447    4994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:33:42.451470    4994 client.go:171] LocalClient.Create took 430.297292ms
	I1025 14:33:44.453620    4994 start.go:128] duration metric: createHost completed in 2.457458208s
	I1025 14:33:44.453694    4994 start.go:83] releasing machines lock for "calico-475000", held for 2.457589334s
	W1025 14:33:44.453771    4994 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:44.464923    4994 out.go:177] * Deleting "calico-475000" in qemu2 ...
	W1025 14:33:44.488808    4994 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:44.488839    4994 start.go:706] Will try again in 5 seconds ...
	I1025 14:33:49.491008    4994 start.go:365] acquiring machines lock for calico-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:33:49.491398    4994 start.go:369] acquired machines lock for "calico-475000" in 298.041µs
	I1025 14:33:49.491523    4994 start.go:93] Provisioning new machine with config: &{Name:calico-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.3 ClusterName:calico-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:33:49.491869    4994 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:33:49.500071    4994 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:33:49.548993    4994 start.go:159] libmachine.API.Create for "calico-475000" (driver="qemu2")
	I1025 14:33:49.549026    4994 client.go:168] LocalClient.Create starting
	I1025 14:33:49.549124    4994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:33:49.549184    4994 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:49.549208    4994 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:49.549270    4994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:33:49.549309    4994 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:49.549323    4994 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:49.549832    4994 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:33:49.683488    4994 main.go:141] libmachine: Creating SSH key...
	I1025 14:33:49.762578    4994 main.go:141] libmachine: Creating Disk image...
	I1025 14:33:49.762584    4994 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:33:49.762749    4994 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/disk.qcow2
	I1025 14:33:49.774953    4994 main.go:141] libmachine: STDOUT: 
	I1025 14:33:49.774969    4994 main.go:141] libmachine: STDERR: 
	I1025 14:33:49.775041    4994 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/disk.qcow2 +20000M
	I1025 14:33:49.785371    4994 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:33:49.785386    4994 main.go:141] libmachine: STDERR: 
	I1025 14:33:49.785401    4994 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/disk.qcow2
	I1025 14:33:49.785408    4994 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:33:49.785449    4994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:14:66:b6:17:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/calico-475000/disk.qcow2
	I1025 14:33:49.787134    4994 main.go:141] libmachine: STDOUT: 
	I1025 14:33:49.787149    4994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:33:49.787161    4994 client.go:171] LocalClient.Create took 238.133709ms
	I1025 14:33:51.789355    4994 start.go:128] duration metric: createHost completed in 2.297464083s
	I1025 14:33:51.789452    4994 start.go:83] releasing machines lock for "calico-475000", held for 2.298056s
	W1025 14:33:51.789960    4994 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:51.799517    4994 out.go:177] 
	W1025 14:33:51.803779    4994 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:33:51.803806    4994 out.go:239] * 
	* 
	W1025 14:33:51.806539    4994 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:33:51.817668    4994 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (9.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
E1025 14:33:59.777805    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-475000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.864294083s)

                                                
                                                
-- stdout --
	* [false-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-475000 in cluster false-475000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-475000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:33:54.309116    5114 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:33:54.309263    5114 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:33:54.309266    5114 out.go:309] Setting ErrFile to fd 2...
	I1025 14:33:54.309268    5114 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:33:54.309406    5114 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:33:54.310474    5114 out.go:303] Setting JSON to false
	I1025 14:33:54.326514    5114 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2008,"bootTime":1698267626,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:33:54.326597    5114 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:33:54.332605    5114 out.go:177] * [false-475000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:33:54.340563    5114 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:33:54.340622    5114 notify.go:220] Checking for updates...
	I1025 14:33:54.346493    5114 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:33:54.349525    5114 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:33:54.350998    5114 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:33:54.354492    5114 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:33:54.357570    5114 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:33:54.360916    5114 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:33:54.360957    5114 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:33:54.365460    5114 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:33:54.372541    5114 start.go:298] selected driver: qemu2
	I1025 14:33:54.372549    5114 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:33:54.372556    5114 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:33:54.375011    5114 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:33:54.378471    5114 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:33:54.381550    5114 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:33:54.381569    5114 cni.go:84] Creating CNI manager for "false"
	I1025 14:33:54.381574    5114 start_flags.go:323] config:
	{Name:false-475000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:false-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPI
D:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:33:54.386288    5114 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:33:54.393444    5114 out.go:177] * Starting control plane node false-475000 in cluster false-475000
	I1025 14:33:54.397532    5114 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:33:54.397549    5114 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:33:54.397560    5114 cache.go:56] Caching tarball of preloaded images
	I1025 14:33:54.397632    5114 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:33:54.397638    5114 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:33:54.397698    5114 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/false-475000/config.json ...
	I1025 14:33:54.397709    5114 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/false-475000/config.json: {Name:mka49c47b0f524fc3e93503ac81ae28c4ea858e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:33:54.397964    5114 start.go:365] acquiring machines lock for false-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:33:54.397996    5114 start.go:369] acquired machines lock for "false-475000" in 25.291µs
	I1025 14:33:54.398006    5114 start.go:93] Provisioning new machine with config: &{Name:false-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:false-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:33:54.398040    5114 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:33:54.406449    5114 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:33:54.423800    5114 start.go:159] libmachine.API.Create for "false-475000" (driver="qemu2")
	I1025 14:33:54.423831    5114 client.go:168] LocalClient.Create starting
	I1025 14:33:54.423906    5114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:33:54.423938    5114 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:54.423949    5114 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:54.423994    5114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:33:54.424017    5114 main.go:141] libmachine: Decoding PEM data...
	I1025 14:33:54.424026    5114 main.go:141] libmachine: Parsing certificate...
	I1025 14:33:54.424393    5114 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:33:54.546995    5114 main.go:141] libmachine: Creating SSH key...
	I1025 14:33:54.701783    5114 main.go:141] libmachine: Creating Disk image...
	I1025 14:33:54.701790    5114 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:33:54.701973    5114 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/disk.qcow2
	I1025 14:33:54.714542    5114 main.go:141] libmachine: STDOUT: 
	I1025 14:33:54.714559    5114 main.go:141] libmachine: STDERR: 
	I1025 14:33:54.714632    5114 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/disk.qcow2 +20000M
	I1025 14:33:54.725425    5114 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:33:54.725448    5114 main.go:141] libmachine: STDERR: 
	I1025 14:33:54.725463    5114 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/disk.qcow2
	I1025 14:33:54.725469    5114 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:33:54.725496    5114 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:a7:3f:ce:61:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/disk.qcow2
	I1025 14:33:54.727185    5114 main.go:141] libmachine: STDOUT: 
	I1025 14:33:54.727197    5114 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:33:54.727220    5114 client.go:171] LocalClient.Create took 303.388083ms
	I1025 14:33:56.729374    5114 start.go:128] duration metric: createHost completed in 2.3313435s
	I1025 14:33:56.729435    5114 start.go:83] releasing machines lock for "false-475000", held for 2.331455375s
	W1025 14:33:56.729490    5114 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:56.744622    5114 out.go:177] * Deleting "false-475000" in qemu2 ...
	W1025 14:33:56.767591    5114 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:33:56.767625    5114 start.go:706] Will try again in 5 seconds ...
	I1025 14:34:01.769880    5114 start.go:365] acquiring machines lock for false-475000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:01.770406    5114 start.go:369] acquired machines lock for "false-475000" in 424.209µs
	I1025 14:34:01.770538    5114 start.go:93] Provisioning new machine with config: &{Name:false-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:false-475000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:34:01.770839    5114 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:34:01.779514    5114 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 14:34:01.825274    5114 start.go:159] libmachine.API.Create for "false-475000" (driver="qemu2")
	I1025 14:34:01.825327    5114 client.go:168] LocalClient.Create starting
	I1025 14:34:01.825444    5114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:34:01.825503    5114 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:01.825526    5114 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:01.825599    5114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:34:01.825633    5114 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:01.825644    5114 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:01.826148    5114 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:34:01.959750    5114 main.go:141] libmachine: Creating SSH key...
	I1025 14:34:02.076031    5114 main.go:141] libmachine: Creating Disk image...
	I1025 14:34:02.076038    5114 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:34:02.076207    5114 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/disk.qcow2
	I1025 14:34:02.088841    5114 main.go:141] libmachine: STDOUT: 
	I1025 14:34:02.088859    5114 main.go:141] libmachine: STDERR: 
	I1025 14:34:02.088918    5114 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/disk.qcow2 +20000M
	I1025 14:34:02.099775    5114 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:34:02.099803    5114 main.go:141] libmachine: STDERR: 
	I1025 14:34:02.099817    5114 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/disk.qcow2
	I1025 14:34:02.099825    5114 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:34:02.099867    5114 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:a8:f0:01:b6:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/false-475000/disk.qcow2
	I1025 14:34:02.101594    5114 main.go:141] libmachine: STDOUT: 
	I1025 14:34:02.101608    5114 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:02.101623    5114 client.go:171] LocalClient.Create took 276.294166ms
	I1025 14:34:04.103765    5114 start.go:128] duration metric: createHost completed in 2.33292575s
	I1025 14:34:04.103827    5114 start.go:83] releasing machines lock for "false-475000", held for 2.333422209s
	W1025 14:34:04.104227    5114 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:04.113134    5114 out.go:177] 
	W1025 14:34:04.117144    5114 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:04.117219    5114 out.go:239] * 
	* 
	W1025 14:34:04.119783    5114 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:34:04.128081    5114 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.87s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (12.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-750000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
E1025 14:34:06.704509    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-750000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (12.068460791s)

                                                
                                                
-- stdout --
	* [old-k8s-version-750000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-750000 in cluster old-k8s-version-750000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-750000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:06.417417    5228 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:06.417570    5228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:06.417573    5228 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:06.417576    5228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:06.417700    5228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:06.418728    5228 out.go:303] Setting JSON to false
	I1025 14:34:06.434591    5228 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2020,"bootTime":1698267626,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:34:06.434664    5228 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:34:06.440759    5228 out.go:177] * [old-k8s-version-750000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:34:06.448696    5228 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:34:06.452758    5228 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:34:06.448822    5228 notify.go:220] Checking for updates...
	I1025 14:34:06.455791    5228 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:34:06.458791    5228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:34:06.461771    5228 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:34:06.464807    5228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:34:06.468102    5228 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:06.468150    5228 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:34:06.472726    5228 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:34:06.478661    5228 start.go:298] selected driver: qemu2
	I1025 14:34:06.478668    5228 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:34:06.478674    5228 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:34:06.481107    5228 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:34:06.483763    5228 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:34:06.486847    5228 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:34:06.486868    5228 cni.go:84] Creating CNI manager for ""
	I1025 14:34:06.486875    5228 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 14:34:06.486881    5228 start_flags.go:323] config:
	{Name:old-k8s-version-750000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-750000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSoc
k: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:06.491337    5228 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:06.498804    5228 out.go:177] * Starting control plane node old-k8s-version-750000 in cluster old-k8s-version-750000
	I1025 14:34:06.502718    5228 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 14:34:06.502735    5228 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1025 14:34:06.502749    5228 cache.go:56] Caching tarball of preloaded images
	I1025 14:34:06.502812    5228 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:34:06.502819    5228 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 14:34:06.502887    5228 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/old-k8s-version-750000/config.json ...
	I1025 14:34:06.502899    5228 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/old-k8s-version-750000/config.json: {Name:mk265368012209372a2df278909686ca39c441d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:34:06.503102    5228 start.go:365] acquiring machines lock for old-k8s-version-750000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:06.503131    5228 start.go:369] acquired machines lock for "old-k8s-version-750000" in 23.666µs
	I1025 14:34:06.503141    5228 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-750000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:34:06.503178    5228 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:34:06.511809    5228 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:34:06.529266    5228 start.go:159] libmachine.API.Create for "old-k8s-version-750000" (driver="qemu2")
	I1025 14:34:06.529288    5228 client.go:168] LocalClient.Create starting
	I1025 14:34:06.529338    5228 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:34:06.529362    5228 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:06.529373    5228 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:06.529407    5228 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:34:06.529426    5228 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:06.529433    5228 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:06.529760    5228 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:34:06.651689    5228 main.go:141] libmachine: Creating SSH key...
	I1025 14:34:06.767074    5228 main.go:141] libmachine: Creating Disk image...
	I1025 14:34:06.767080    5228 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:34:06.767244    5228 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2
	I1025 14:34:06.779468    5228 main.go:141] libmachine: STDOUT: 
	I1025 14:34:06.779497    5228 main.go:141] libmachine: STDERR: 
	I1025 14:34:06.779553    5228 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2 +20000M
	I1025 14:34:06.790253    5228 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:34:06.790275    5228 main.go:141] libmachine: STDERR: 
	I1025 14:34:06.790301    5228 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2
	I1025 14:34:06.790306    5228 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:34:06.790353    5228 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:1a:bd:3a:ea:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2
	I1025 14:34:06.792165    5228 main.go:141] libmachine: STDOUT: 
	I1025 14:34:06.792177    5228 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:06.792196    5228 client.go:171] LocalClient.Create took 262.904791ms
	I1025 14:34:08.794381    5228 start.go:128] duration metric: createHost completed in 2.291201292s
	I1025 14:34:08.794460    5228 start.go:83] releasing machines lock for "old-k8s-version-750000", held for 2.291347792s
	W1025 14:34:08.794503    5228 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:08.804154    5228 out.go:177] * Deleting "old-k8s-version-750000" in qemu2 ...
	W1025 14:34:08.827495    5228 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:08.827543    5228 start.go:706] Will try again in 5 seconds ...
	I1025 14:34:13.828508    5228 start.go:365] acquiring machines lock for old-k8s-version-750000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:16.090684    5228 start.go:369] acquired machines lock for "old-k8s-version-750000" in 2.262136792s
	I1025 14:34:16.090784    5228 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-750000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:34:16.091056    5228 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:34:16.099744    5228 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:34:16.147551    5228 start.go:159] libmachine.API.Create for "old-k8s-version-750000" (driver="qemu2")
	I1025 14:34:16.147611    5228 client.go:168] LocalClient.Create starting
	I1025 14:34:16.147701    5228 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:34:16.147751    5228 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:16.147767    5228 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:16.147833    5228 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:34:16.147867    5228 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:16.147878    5228 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:16.148394    5228 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:34:16.292572    5228 main.go:141] libmachine: Creating SSH key...
	I1025 14:34:16.377345    5228 main.go:141] libmachine: Creating Disk image...
	I1025 14:34:16.377354    5228 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:34:16.378493    5228 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2
	I1025 14:34:16.390936    5228 main.go:141] libmachine: STDOUT: 
	I1025 14:34:16.390961    5228 main.go:141] libmachine: STDERR: 
	I1025 14:34:16.391011    5228 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2 +20000M
	I1025 14:34:16.401743    5228 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:34:16.401759    5228 main.go:141] libmachine: STDERR: 
	I1025 14:34:16.401778    5228 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2
	I1025 14:34:16.401785    5228 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:34:16.401824    5228 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:ff:e0:7a:af:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2
	I1025 14:34:16.403554    5228 main.go:141] libmachine: STDOUT: 
	I1025 14:34:16.403576    5228 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:16.403591    5228 client.go:171] LocalClient.Create took 255.974458ms
	I1025 14:34:18.405713    5228 start.go:128] duration metric: createHost completed in 2.314665916s
	I1025 14:34:18.405768    5228 start.go:83] releasing machines lock for "old-k8s-version-750000", held for 2.315065917s
	W1025 14:34:18.406038    5228 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-750000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-750000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:18.422127    5228 out.go:177] 
	W1025 14:34:18.428218    5228 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:18.428244    5228 out.go:239] * 
	* 
	W1025 14:34:18.430805    5228 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:34:18.442004    5228 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-750000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000: exit status 7 (67.488916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (12.14s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (2.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3868728883.exe start -p stopped-upgrade-867000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3868728883.exe start -p stopped-upgrade-867000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3868728883.exe: permission denied (6.43225ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3868728883.exe start -p stopped-upgrade-867000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3868728883.exe start -p stopped-upgrade-867000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3868728883.exe: permission denied (6.136958ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3868728883.exe start -p stopped-upgrade-867000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3868728883.exe start -p stopped-upgrade-867000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3868728883.exe: permission denied (6.308291ms)
version_upgrade_test.go:202: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3868728883.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.73s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-867000
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-867000: exit status 85 (122.803542ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-475000 sudo                                | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | systemctl status kubelet --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo                                | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo                                | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo cat                            | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo cat                            | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo                                | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo                                | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo cat                            | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo docker                         | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo                                | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo                                | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo cat                            | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo cat                            | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo                                | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo                                | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo                                | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo cat                            | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo cat                            | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo                                | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo                                | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo                                | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo find                           | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p calico-475000 sudo crio                           | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p calico-475000                                     | calico-475000          | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT | 25 Oct 23 14:33 PDT |
	| start   | -p false-475000 --memory=3072                        | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:33 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --wait-timeout=15m --cni=false                       |                        |         |         |                     |                     |
	|         | --driver=qemu2                                       |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo cat                             | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo cat                             | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | /etc/hosts                                           |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo cat                             | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | /etc/resolv.conf                                     |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo crictl                          | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | pods                                                 |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo crictl ps                       | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | --all                                                |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo find                            | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo ip a s                          | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	| ssh     | -p false-475000 sudo ip r s                          | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	| ssh     | -p false-475000 sudo                                 | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | iptables-save                                        |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo iptables                        | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | -t nat -L -n -v                                      |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo systemctl                       | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | status kubelet --all --full                          |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo systemctl                       | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | cat kubelet --no-pager                               |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo                                 | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo cat                             | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo cat                             | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo systemctl                       | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | status docker --all --full                           |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo systemctl                       | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | cat docker --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo cat                             | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo docker                          | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo systemctl                       | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | status cri-docker --all --full                       |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo systemctl                       | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | cat cri-docker --no-pager                            |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo cat                             | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo cat                             | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo                                 | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo systemctl                       | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | status containerd --all --full                       |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo systemctl                       | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | cat containerd --no-pager                            |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo cat                             | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo cat                             | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo                                 | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo systemctl                       | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | status crio --all --full                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo systemctl                       | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | cat crio --no-pager                                  |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo find                            | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p false-475000 sudo crio                            | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p false-475000                                      | false-475000           | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT | 25 Oct 23 14:34 PDT |
	| start   | -p old-k8s-version-750000                            | old-k8s-version-750000 | jenkins | v1.31.2 | 25 Oct 23 14:34 PDT |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --kvm-network=default                                |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                        |         |         |                     |                     |
	|         | --keep-context=false                                 |                        |         |         |                     |                     |
	|         | --driver=qemu2                                       |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 14:34:06
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 14:34:06.417417    5228 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:06.417570    5228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:06.417573    5228 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:06.417576    5228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:06.417700    5228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:06.418728    5228 out.go:303] Setting JSON to false
	I1025 14:34:06.434591    5228 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2020,"bootTime":1698267626,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:34:06.434664    5228 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:34:06.440759    5228 out.go:177] * [old-k8s-version-750000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:34:06.448696    5228 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:34:06.452758    5228 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:34:06.448822    5228 notify.go:220] Checking for updates...
	I1025 14:34:06.455791    5228 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:34:06.458791    5228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:34:06.461771    5228 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:34:06.464807    5228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:34:06.468102    5228 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:06.468150    5228 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:34:06.472726    5228 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:34:06.478661    5228 start.go:298] selected driver: qemu2
	I1025 14:34:06.478668    5228 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:34:06.478674    5228 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:34:06.481107    5228 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:34:06.483763    5228 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:34:06.486847    5228 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:34:06.486868    5228 cni.go:84] Creating CNI manager for ""
	I1025 14:34:06.486875    5228 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 14:34:06.486881    5228 start_flags.go:323] config:
	{Name:old-k8s-version-750000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-750000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSoc
k: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:06.491337    5228 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:06.498804    5228 out.go:177] * Starting control plane node old-k8s-version-750000 in cluster old-k8s-version-750000
	I1025 14:34:06.502718    5228 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 14:34:06.502735    5228 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1025 14:34:06.502749    5228 cache.go:56] Caching tarball of preloaded images
	I1025 14:34:06.502812    5228 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:34:06.502819    5228 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 14:34:06.502887    5228 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/old-k8s-version-750000/config.json ...
	I1025 14:34:06.502899    5228 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/old-k8s-version-750000/config.json: {Name:mk265368012209372a2df278909686ca39c441d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:34:06.503102    5228 start.go:365] acquiring machines lock for old-k8s-version-750000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:06.503131    5228 start.go:369] acquired machines lock for "old-k8s-version-750000" in 23.666µs
	I1025 14:34:06.503141    5228 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-750000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:34:06.503178    5228 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:34:06.511809    5228 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:34:06.529266    5228 start.go:159] libmachine.API.Create for "old-k8s-version-750000" (driver="qemu2")
	I1025 14:34:06.529288    5228 client.go:168] LocalClient.Create starting
	I1025 14:34:06.529338    5228 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:34:06.529362    5228 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:06.529373    5228 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:06.529407    5228 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:34:06.529426    5228 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:06.529433    5228 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:06.529760    5228 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:34:06.651689    5228 main.go:141] libmachine: Creating SSH key...
	I1025 14:34:06.767074    5228 main.go:141] libmachine: Creating Disk image...
	I1025 14:34:06.767080    5228 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:34:06.767244    5228 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2
	I1025 14:34:06.779468    5228 main.go:141] libmachine: STDOUT: 
	I1025 14:34:06.779497    5228 main.go:141] libmachine: STDERR: 
	I1025 14:34:06.779553    5228 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2 +20000M
	I1025 14:34:06.790253    5228 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:34:06.790275    5228 main.go:141] libmachine: STDERR: 
	I1025 14:34:06.790301    5228 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2
	I1025 14:34:06.790306    5228 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:34:06.790353    5228 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:1a:bd:3a:ea:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2
	I1025 14:34:06.792165    5228 main.go:141] libmachine: STDOUT: 
	I1025 14:34:06.792177    5228 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:06.792196    5228 client.go:171] LocalClient.Create took 262.904791ms
	I1025 14:34:08.794381    5228 start.go:128] duration metric: createHost completed in 2.291201292s
	I1025 14:34:08.794460    5228 start.go:83] releasing machines lock for "old-k8s-version-750000", held for 2.291347792s
	W1025 14:34:08.794503    5228 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:08.804154    5228 out.go:177] * Deleting "old-k8s-version-750000" in qemu2 ...
	W1025 14:34:08.827495    5228 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:08.827543    5228 start.go:706] Will try again in 5 seconds ...
	
	* 
	* Profile "stopped-upgrade-867000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-867000"

                                                
                                                
-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (9.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-549000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-549000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.3: exit status 80 (9.832782333s)

                                                
                                                
-- stdout --
	* [no-preload-549000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-549000 in cluster no-preload-549000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-549000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:13.730894    5260 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:13.731039    5260 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:13.731042    5260 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:13.731045    5260 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:13.731163    5260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:13.732280    5260 out.go:303] Setting JSON to false
	I1025 14:34:13.748348    5260 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2027,"bootTime":1698267626,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:34:13.748437    5260 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:34:13.753523    5260 out.go:177] * [no-preload-549000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:34:13.760597    5260 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:34:13.760648    5260 notify.go:220] Checking for updates...
	I1025 14:34:13.764524    5260 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:34:13.767594    5260 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:34:13.770591    5260 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:34:13.773569    5260 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:34:13.776569    5260 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:34:13.779864    5260 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:13.779932    5260 config.go:182] Loaded profile config "old-k8s-version-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1025 14:34:13.779970    5260 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:34:13.784520    5260 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:34:13.791541    5260 start.go:298] selected driver: qemu2
	I1025 14:34:13.791547    5260 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:34:13.791553    5260 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:34:13.793875    5260 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:34:13.796469    5260 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:34:13.799588    5260 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:34:13.799607    5260 cni.go:84] Creating CNI manager for ""
	I1025 14:34:13.799615    5260 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:34:13.799620    5260 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 14:34:13.799627    5260 start_flags.go:323] config:
	{Name:no-preload-549000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-549000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSH
AgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:13.804180    5260 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:13.807544    5260 out.go:177] * Starting control plane node no-preload-549000 in cluster no-preload-549000
	I1025 14:34:13.815591    5260 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:34:13.815660    5260 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/no-preload-549000/config.json ...
	I1025 14:34:13.815676    5260 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/no-preload-549000/config.json: {Name:mkcf28da404d21c5cb61a923f94f7feb154e6266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:34:13.815680    5260 cache.go:107] acquiring lock: {Name:mkb35cb21a42ff8ed731669b39590bafefcc2df8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:13.815684    5260 cache.go:107] acquiring lock: {Name:mk0830dcf5664a7477cc0f6363282f1c8a16b303 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:13.815700    5260 cache.go:107] acquiring lock: {Name:mk0fe9d4513b7e3fd22b0a9c10c524436f913769 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:13.815738    5260 cache.go:115] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 14:34:13.815749    5260 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 72.333µs
	I1025 14:34:13.815758    5260 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 14:34:13.815767    5260 cache.go:107] acquiring lock: {Name:mkff10fcabe92f2bbf75b26dcaa6988ffba1c951 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:13.815798    5260 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1025 14:34:13.815820    5260 cache.go:107] acquiring lock: {Name:mk04035b5b85babdc8fb002dad4f8f6120933091 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:13.815889    5260 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1025 14:34:13.815886    5260 cache.go:107] acquiring lock: {Name:mkaa40daf5b409336bbc032f5d45dbb22e3fd23c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:13.815852    5260 cache.go:107] acquiring lock: {Name:mkf0b72b5dd576d8dbc0837a1eceef6525efed27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:13.815945    5260 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1025 14:34:13.815921    5260 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1025 14:34:13.815904    5260 cache.go:107] acquiring lock: {Name:mkbb01632da13296c03dd75625bb2a6d820e3777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:13.816040    5260 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1025 14:34:13.816071    5260 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1025 14:34:13.816135    5260 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1025 14:34:13.816161    5260 start.go:365] acquiring machines lock for no-preload-549000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:13.816203    5260 start.go:369] acquired machines lock for "no-preload-549000" in 28.375µs
	I1025 14:34:13.816215    5260 start.go:93] Provisioning new machine with config: &{Name:no-preload-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-549000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:34:13.816268    5260 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:34:13.824354    5260 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:34:13.834084    5260 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1025 14:34:13.834113    5260 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1025 14:34:13.834087    5260 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1025 14:34:13.834146    5260 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1025 14:34:13.834298    5260 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1025 14:34:13.834741    5260 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1025 14:34:13.834868    5260 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1025 14:34:13.841383    5260 start.go:159] libmachine.API.Create for "no-preload-549000" (driver="qemu2")
	I1025 14:34:13.841399    5260 client.go:168] LocalClient.Create starting
	I1025 14:34:13.841456    5260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:34:13.841482    5260 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:13.841498    5260 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:13.841537    5260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:34:13.841556    5260 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:13.841582    5260 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:13.841907    5260 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:34:13.968572    5260 main.go:141] libmachine: Creating SSH key...
	I1025 14:34:14.063059    5260 main.go:141] libmachine: Creating Disk image...
	I1025 14:34:14.063068    5260 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:34:14.063242    5260 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/disk.qcow2
	I1025 14:34:14.076012    5260 main.go:141] libmachine: STDOUT: 
	I1025 14:34:14.076040    5260 main.go:141] libmachine: STDERR: 
	I1025 14:34:14.076085    5260 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/disk.qcow2 +20000M
	I1025 14:34:14.088246    5260 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:34:14.088263    5260 main.go:141] libmachine: STDERR: 
	I1025 14:34:14.088277    5260 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/disk.qcow2
	I1025 14:34:14.088283    5260 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:34:14.088315    5260 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:6c:07:8e:91:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/disk.qcow2
	I1025 14:34:14.090192    5260 main.go:141] libmachine: STDOUT: 
	I1025 14:34:14.090207    5260 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:14.090225    5260 client.go:171] LocalClient.Create took 248.824167ms
	I1025 14:34:14.427499    5260 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I1025 14:34:14.466940    5260 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I1025 14:34:14.558137    5260 cache.go:157] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I1025 14:34:14.558150    5260 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 742.470458ms
	I1025 14:34:14.558156    5260 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I1025 14:34:14.698047    5260 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.3
	I1025 14:34:14.899587    5260 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1025 14:34:15.255048    5260 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.3
	I1025 14:34:15.345242    5260 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I1025 14:34:15.568103    5260 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.3
	I1025 14:34:16.090464    5260 start.go:128] duration metric: createHost completed in 2.274174291s
	I1025 14:34:16.090520    5260 start.go:83] releasing machines lock for "no-preload-549000", held for 2.274335458s
	W1025 14:34:16.090575    5260 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:16.114329    5260 out.go:177] * Deleting "no-preload-549000" in qemu2 ...
	W1025 14:34:16.131077    5260 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:16.131101    5260 start.go:706] Will try again in 5 seconds ...
	I1025 14:34:17.466532    5260 cache.go:157] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I1025 14:34:17.466572    5260 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 3.650845916s
	I1025 14:34:17.466590    5260 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I1025 14:34:17.617367    5260 cache.go:157] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.3 exists
	I1025 14:34:17.617428    5260 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.3" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.3" took 3.801663667s
	I1025 14:34:17.617470    5260 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.3 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.3 succeeded
	I1025 14:34:18.204092    5260 cache.go:157] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.3 exists
	I1025 14:34:18.204165    5260 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.3" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.3" took 4.388533792s
	I1025 14:34:18.204196    5260 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.3 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.3 succeeded
	I1025 14:34:18.405332    5260 cache.go:157] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.3 exists
	I1025 14:34:18.405376    5260 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.3" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.3" took 4.589595875s
	I1025 14:34:18.405402    5260 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.3 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.3 succeeded
	I1025 14:34:19.853100    5260 cache.go:157] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.3 exists
	I1025 14:34:19.853148    5260 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.3" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.3" took 6.037355333s
	I1025 14:34:19.853175    5260 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.3 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.3 succeeded
	I1025 14:34:21.131217    5260 start.go:365] acquiring machines lock for no-preload-549000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:21.131660    5260 start.go:369] acquired machines lock for "no-preload-549000" in 361.917µs
	I1025 14:34:21.131791    5260 start.go:93] Provisioning new machine with config: &{Name:no-preload-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-549000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:34:21.132035    5260 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:34:21.146657    5260 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:34:21.196399    5260 start.go:159] libmachine.API.Create for "no-preload-549000" (driver="qemu2")
	I1025 14:34:21.196458    5260 client.go:168] LocalClient.Create starting
	I1025 14:34:21.196565    5260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:34:21.196618    5260 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:21.196641    5260 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:21.196704    5260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:34:21.196731    5260 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:21.196743    5260 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:21.197234    5260 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:34:21.335374    5260 main.go:141] libmachine: Creating SSH key...
	I1025 14:34:21.463533    5260 main.go:141] libmachine: Creating Disk image...
	I1025 14:34:21.463543    5260 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:34:21.463717    5260 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/disk.qcow2
	I1025 14:34:21.475713    5260 main.go:141] libmachine: STDOUT: 
	I1025 14:34:21.475769    5260 main.go:141] libmachine: STDERR: 
	I1025 14:34:21.475813    5260 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/disk.qcow2 +20000M
	I1025 14:34:21.486623    5260 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:34:21.486642    5260 main.go:141] libmachine: STDERR: 
	I1025 14:34:21.486663    5260 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/disk.qcow2
	I1025 14:34:21.486673    5260 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:34:21.486715    5260 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:36:58:b8:4f:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/disk.qcow2
	I1025 14:34:21.488530    5260 main.go:141] libmachine: STDOUT: 
	I1025 14:34:21.488543    5260 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:21.488560    5260 client.go:171] LocalClient.Create took 292.094792ms
	I1025 14:34:23.488805    5260 start.go:128] duration metric: createHost completed in 2.356735791s
	I1025 14:34:23.488905    5260 start.go:83] releasing machines lock for "no-preload-549000", held for 2.3572495s
	W1025 14:34:23.489146    5260 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-549000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-549000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:23.502733    5260 out.go:177] 
	W1025 14:34:23.506796    5260 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:23.506826    5260 out.go:239] * 
	* 
	W1025 14:34:23.509165    5260 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:34:23.517685    5260 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-549000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000: exit status 7 (67.684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-549000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.90s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-750000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-750000 create -f testdata/busybox.yaml: exit status 1 (28.515ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-750000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000: exit status 7 (32.118875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-750000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000: exit status 7 (31.808958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-750000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-750000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-750000 describe deploy/metrics-server -n kube-system: exit status 1 (25.697458ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-750000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-750000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000: exit status 7 (35.422083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-750000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-750000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.21652775s)

                                                
                                                
-- stdout --
	* [old-k8s-version-750000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-750000 in cluster old-k8s-version-750000
	* Restarting existing qemu2 VM for "old-k8s-version-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:18.922993    5392 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:18.923144    5392 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:18.923147    5392 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:18.923150    5392 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:18.923273    5392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:18.924209    5392 out.go:303] Setting JSON to false
	I1025 14:34:18.940113    5392 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2032,"bootTime":1698267626,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:34:18.940213    5392 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:34:18.945127    5392 out.go:177] * [old-k8s-version-750000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:34:18.955095    5392 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:34:18.952162    5392 notify.go:220] Checking for updates...
	I1025 14:34:18.963068    5392 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:34:18.969138    5392 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:34:18.976067    5392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:34:18.984095    5392 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:34:18.991935    5392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:34:18.996330    5392 config.go:182] Loaded profile config "old-k8s-version-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1025 14:34:19.000108    5392 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1025 14:34:19.004071    5392 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:34:19.007122    5392 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 14:34:19.015018    5392 start.go:298] selected driver: qemu2
	I1025 14:34:19.015025    5392 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-750000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:19.015085    5392 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:34:19.017503    5392 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:34:19.017530    5392 cni.go:84] Creating CNI manager for ""
	I1025 14:34:19.017540    5392 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 14:34:19.017546    5392 start_flags.go:323] config:
	{Name:old-k8s-version-750000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-750000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Use
rs:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:19.021941    5392 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:19.030085    5392 out.go:177] * Starting control plane node old-k8s-version-750000 in cluster old-k8s-version-750000
	I1025 14:34:19.034108    5392 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 14:34:19.034122    5392 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1025 14:34:19.034131    5392 cache.go:56] Caching tarball of preloaded images
	I1025 14:34:19.034185    5392 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:34:19.034190    5392 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 14:34:19.034261    5392 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/old-k8s-version-750000/config.json ...
	I1025 14:34:19.034558    5392 start.go:365] acquiring machines lock for old-k8s-version-750000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:19.034590    5392 start.go:369] acquired machines lock for "old-k8s-version-750000" in 26.292µs
	I1025 14:34:19.034599    5392 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:34:19.034606    5392 fix.go:54] fixHost starting: 
	I1025 14:34:19.034733    5392 fix.go:102] recreateIfNeeded on old-k8s-version-750000: state=Stopped err=<nil>
	W1025 14:34:19.034742    5392 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:34:19.039111    5392 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-750000" ...
	I1025 14:34:19.047058    5392 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:ff:e0:7a:af:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2
	I1025 14:34:19.049274    5392 main.go:141] libmachine: STDOUT: 
	I1025 14:34:19.049296    5392 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:19.049326    5392 fix.go:56] fixHost completed within 14.722041ms
	I1025 14:34:19.049330    5392 start.go:83] releasing machines lock for "old-k8s-version-750000", held for 14.73525ms
	W1025 14:34:19.049335    5392 start.go:691] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:19.049366    5392 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:19.049371    5392 start.go:706] Will try again in 5 seconds ...
	I1025 14:34:24.051370    5392 start.go:365] acquiring machines lock for old-k8s-version-750000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:24.051446    5392 start.go:369] acquired machines lock for "old-k8s-version-750000" in 53.625µs
	I1025 14:34:24.051468    5392 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:34:24.051471    5392 fix.go:54] fixHost starting: 
	I1025 14:34:24.051601    5392 fix.go:102] recreateIfNeeded on old-k8s-version-750000: state=Stopped err=<nil>
	W1025 14:34:24.051606    5392 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:34:24.057286    5392 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-750000" ...
	I1025 14:34:24.064360    5392 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:ff:e0:7a:af:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/old-k8s-version-750000/disk.qcow2
	I1025 14:34:24.066664    5392 main.go:141] libmachine: STDOUT: 
	I1025 14:34:24.066682    5392 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:24.066704    5392 fix.go:56] fixHost completed within 15.232125ms
	I1025 14:34:24.066709    5392 start.go:83] releasing machines lock for "old-k8s-version-750000", held for 15.258209ms
	W1025 14:34:24.066757    5392 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:24.074126    5392 out.go:177] 
	W1025 14:34:24.086303    5392 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:24.086310    5392 out.go:239] * 
	* 
	W1025 14:34:24.086915    5392 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:34:24.097247    5392 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-750000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000: exit status 7 (37.096541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-549000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-549000 create -f testdata/busybox.yaml: exit status 1 (28.407042ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-549000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000: exit status 7 (31.83775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-549000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000: exit status 7 (31.547833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-549000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-549000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-549000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-549000 describe deploy/metrics-server -n kube-system: exit status 1 (25.772916ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-549000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-549000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000: exit status 7 (31.056167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-549000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (5.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-549000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-549000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.3: exit status 80 (5.21749225s)

                                                
                                                
-- stdout --
	* [no-preload-549000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-549000 in cluster no-preload-549000
	* Restarting existing qemu2 VM for "no-preload-549000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-549000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:24.003885    5426 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:24.004061    5426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:24.004064    5426 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:24.004066    5426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:24.004190    5426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:24.005216    5426 out.go:303] Setting JSON to false
	I1025 14:34:24.021390    5426 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2038,"bootTime":1698267626,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:34:24.021451    5426 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:34:24.026283    5426 out.go:177] * [no-preload-549000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:34:24.038251    5426 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:34:24.033377    5426 notify.go:220] Checking for updates...
	I1025 14:34:24.045292    5426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:34:24.048395    5426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:34:24.051322    5426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:34:24.057282    5426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:34:24.064307    5426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:34:24.067558    5426 config.go:182] Loaded profile config "no-preload-549000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:24.067827    5426 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:34:24.082296    5426 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 14:34:24.089336    5426 start.go:298] selected driver: qemu2
	I1025 14:34:24.089343    5426 start.go:902] validating driver "qemu2" against &{Name:no-preload-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-549000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:24.089400    5426 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:34:24.092093    5426 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:34:24.092127    5426 cni.go:84] Creating CNI manager for ""
	I1025 14:34:24.092138    5426 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:34:24.092148    5426 start_flags.go:323] config:
	{Name:no-preload-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-549000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:24.096904    5426 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:24.109347    5426 out.go:177] * Starting control plane node no-preload-549000 in cluster no-preload-549000
	I1025 14:34:24.113307    5426 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:34:24.113426    5426 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/no-preload-549000/config.json ...
	I1025 14:34:24.113649    5426 cache.go:107] acquiring lock: {Name:mkb35cb21a42ff8ed731669b39590bafefcc2df8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:24.113675    5426 cache.go:107] acquiring lock: {Name:mk0fe9d4513b7e3fd22b0a9c10c524436f913769 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:24.113689    5426 cache.go:107] acquiring lock: {Name:mkf0b72b5dd576d8dbc0837a1eceef6525efed27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:24.113717    5426 cache.go:107] acquiring lock: {Name:mkaa40daf5b409336bbc032f5d45dbb22e3fd23c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:24.113783    5426 cache.go:107] acquiring lock: {Name:mk04035b5b85babdc8fb002dad4f8f6120933091 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:24.113809    5426 cache.go:115] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 14:34:24.113809    5426 cache.go:115] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.3 exists
	I1025 14:34:24.113825    5426 cache.go:115] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I1025 14:34:24.113826    5426 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 262.042µs
	I1025 14:34:24.113829    5426 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.3" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.3" took 176.583µs
	I1025 14:34:24.113839    5426 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.3 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.3 succeeded
	I1025 14:34:24.113840    5426 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 14:34:24.113836    5426 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 199.083µs
	I1025 14:34:24.113863    5426 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I1025 14:34:24.113867    5426 start.go:365] acquiring machines lock for no-preload-549000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:24.113852    5426 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1025 14:34:24.113910    5426 cache.go:115] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.3 exists
	I1025 14:34:24.113860    5426 cache.go:107] acquiring lock: {Name:mkff10fcabe92f2bbf75b26dcaa6988ffba1c951 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:24.113929    5426 start.go:369] acquired machines lock for "no-preload-549000" in 32.708µs
	I1025 14:34:24.113940    5426 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:34:24.113946    5426 fix.go:54] fixHost starting: 
	I1025 14:34:24.113845    5426 cache.go:107] acquiring lock: {Name:mkbb01632da13296c03dd75625bb2a6d820e3777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:24.114008    5426 cache.go:115] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I1025 14:34:24.114014    5426 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 157.792µs
	I1025 14:34:24.114020    5426 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I1025 14:34:24.113879    5426 cache.go:107] acquiring lock: {Name:mk0830dcf5664a7477cc0f6363282f1c8a16b303 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:24.114059    5426 cache.go:115] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.3 exists
	I1025 14:34:24.114063    5426 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.3" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.3" took 411.709µs
	I1025 14:34:24.114067    5426 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.3 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.3 succeeded
	I1025 14:34:24.113924    5426 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.3" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.3" took 293.333µs
	I1025 14:34:24.114070    5426 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.3 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.3 succeeded
	I1025 14:34:24.114080    5426 cache.go:115] /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.3 exists
	I1025 14:34:24.114085    5426 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.3" -> "/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.3" took 266.125µs
	I1025 14:34:24.114090    5426 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.3 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.3 succeeded
	I1025 14:34:24.114092    5426 fix.go:102] recreateIfNeeded on no-preload-549000: state=Stopped err=<nil>
	W1025 14:34:24.114102    5426 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:34:24.120338    5426 out.go:177] * Restarting existing qemu2 VM for "no-preload-549000" ...
	I1025 14:34:24.125398    5426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:36:58:b8:4f:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/disk.qcow2
	I1025 14:34:24.125517    5426 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1025 14:34:24.128371    5426 main.go:141] libmachine: STDOUT: 
	I1025 14:34:24.128465    5426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:24.128502    5426 fix.go:56] fixHost completed within 14.556459ms
	I1025 14:34:24.128507    5426 start.go:83] releasing machines lock for "no-preload-549000", held for 14.5735ms
	W1025 14:34:24.128513    5426 start.go:691] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:24.128569    5426 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:24.128573    5426 start.go:706] Will try again in 5 seconds ...
	I1025 14:34:24.732640    5426 cache.go:162] opening:  /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I1025 14:34:29.128834    5426 start.go:365] acquiring machines lock for no-preload-549000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:29.129205    5426 start.go:369] acquired machines lock for "no-preload-549000" in 283.083µs
	I1025 14:34:29.129332    5426 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:34:29.129362    5426 fix.go:54] fixHost starting: 
	I1025 14:34:29.130210    5426 fix.go:102] recreateIfNeeded on no-preload-549000: state=Stopped err=<nil>
	W1025 14:34:29.130240    5426 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:34:29.135917    5426 out.go:177] * Restarting existing qemu2 VM for "no-preload-549000" ...
	I1025 14:34:29.142669    5426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:36:58:b8:4f:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/no-preload-549000/disk.qcow2
	I1025 14:34:29.152888    5426 main.go:141] libmachine: STDOUT: 
	I1025 14:34:29.152941    5426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:29.153042    5426 fix.go:56] fixHost completed within 23.689917ms
	I1025 14:34:29.153063    5426 start.go:83] releasing machines lock for "no-preload-549000", held for 23.836083ms
	W1025 14:34:29.153305    5426 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-549000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-549000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:29.160865    5426 out.go:177] 
	W1025 14:34:29.163867    5426 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:29.163895    5426 out.go:239] * 
	* 
	W1025 14:34:29.166357    5426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:34:29.175867    5426 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-549000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000: exit status 7 (68.42975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-549000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-750000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000: exit status 7 (34.954ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-750000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-750000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-750000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.129667ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-750000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-750000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000: exit status 7 (33.522792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-750000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-750000 "sudo crictl images -o json": exit status 89 (43.8215ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-750000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-750000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-750000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000: exit status 7 (32.986584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-750000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-750000 --alsologtostderr -v=1: exit status 89 (48.447167ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-750000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:24.351700    5457 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:24.352220    5457 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:24.352224    5457 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:24.352227    5457 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:24.352373    5457 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:24.352606    5457 out.go:303] Setting JSON to false
	I1025 14:34:24.352617    5457 mustload.go:65] Loading cluster: old-k8s-version-750000
	I1025 14:34:24.352809    5457 config.go:182] Loaded profile config "old-k8s-version-750000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1025 14:34:24.356721    5457 out.go:177] * The control plane node must be running for this command
	I1025 14:34:24.364641    5457 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-750000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-750000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000: exit status 7 (31.814625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-750000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000: exit status 7 (31.860792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (10.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-202000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-202000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.3: exit status 80 (10.087781834s)

                                                
                                                
-- stdout --
	* [embed-certs-202000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-202000 in cluster embed-certs-202000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-202000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:24.840542    5482 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:24.840674    5482 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:24.840676    5482 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:24.840679    5482 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:24.840806    5482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:24.841901    5482 out.go:303] Setting JSON to false
	I1025 14:34:24.857859    5482 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2038,"bootTime":1698267626,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:34:24.857950    5482 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:34:24.867081    5482 out.go:177] * [embed-certs-202000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:34:24.871072    5482 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:34:24.871143    5482 notify.go:220] Checking for updates...
	I1025 14:34:24.876358    5482 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:34:24.879006    5482 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:34:24.882026    5482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:34:24.885041    5482 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:34:24.892054    5482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:34:24.896248    5482 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:24.896319    5482 config.go:182] Loaded profile config "no-preload-549000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:24.896367    5482 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:34:24.900992    5482 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:34:24.906989    5482 start.go:298] selected driver: qemu2
	I1025 14:34:24.906996    5482 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:34:24.907003    5482 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:34:24.909385    5482 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:34:24.914004    5482 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:34:24.917127    5482 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:34:24.917156    5482 cni.go:84] Creating CNI manager for ""
	I1025 14:34:24.917165    5482 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:34:24.917169    5482 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 14:34:24.917181    5482 start_flags.go:323] config:
	{Name:embed-certs-202000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-202000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SS
HAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:24.921806    5482 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:24.930048    5482 out.go:177] * Starting control plane node embed-certs-202000 in cluster embed-certs-202000
	I1025 14:34:24.933883    5482 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:34:24.933898    5482 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:34:24.933906    5482 cache.go:56] Caching tarball of preloaded images
	I1025 14:34:24.933965    5482 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:34:24.933970    5482 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:34:24.934031    5482 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/embed-certs-202000/config.json ...
	I1025 14:34:24.934043    5482 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/embed-certs-202000/config.json: {Name:mkb4a588e2ea0bf4cac028f908a5aff8ae8a13ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:34:24.934340    5482 start.go:365] acquiring machines lock for embed-certs-202000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:24.934372    5482 start.go:369] acquired machines lock for "embed-certs-202000" in 23.542µs
	I1025 14:34:24.934382    5482 start.go:93] Provisioning new machine with config: &{Name:embed-certs-202000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-202000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:34:24.934410    5482 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:34:24.942017    5482 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:34:24.958264    5482 start.go:159] libmachine.API.Create for "embed-certs-202000" (driver="qemu2")
	I1025 14:34:24.958291    5482 client.go:168] LocalClient.Create starting
	I1025 14:34:24.958342    5482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:34:24.958377    5482 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:24.958389    5482 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:24.958427    5482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:34:24.958445    5482 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:24.958454    5482 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:24.958808    5482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:34:25.090078    5482 main.go:141] libmachine: Creating SSH key...
	I1025 14:34:25.159575    5482 main.go:141] libmachine: Creating Disk image...
	I1025 14:34:25.159583    5482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:34:25.159765    5482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/disk.qcow2
	I1025 14:34:25.172074    5482 main.go:141] libmachine: STDOUT: 
	I1025 14:34:25.172095    5482 main.go:141] libmachine: STDERR: 
	I1025 14:34:25.172153    5482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/disk.qcow2 +20000M
	I1025 14:34:25.182861    5482 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:34:25.182879    5482 main.go:141] libmachine: STDERR: 
	I1025 14:34:25.182894    5482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/disk.qcow2
	I1025 14:34:25.182906    5482 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:34:25.182942    5482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:29:10:ee:c0:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/disk.qcow2
	I1025 14:34:25.184660    5482 main.go:141] libmachine: STDOUT: 
	I1025 14:34:25.184671    5482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:25.184691    5482 client.go:171] LocalClient.Create took 226.397917ms
	I1025 14:34:27.186908    5482 start.go:128] duration metric: createHost completed in 2.252487625s
	I1025 14:34:27.186981    5482 start.go:83] releasing machines lock for "embed-certs-202000", held for 2.252627083s
	W1025 14:34:27.187030    5482 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:27.204175    5482 out.go:177] * Deleting "embed-certs-202000" in qemu2 ...
	W1025 14:34:27.230095    5482 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:27.230146    5482 start.go:706] Will try again in 5 seconds ...
	I1025 14:34:32.232325    5482 start.go:365] acquiring machines lock for embed-certs-202000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:32.562171    5482 start.go:369] acquired machines lock for "embed-certs-202000" in 329.734666ms
	I1025 14:34:32.562260    5482 start.go:93] Provisioning new machine with config: &{Name:embed-certs-202000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-202000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:34:32.562524    5482 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:34:32.572183    5482 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:34:32.619249    5482 start.go:159] libmachine.API.Create for "embed-certs-202000" (driver="qemu2")
	I1025 14:34:32.619300    5482 client.go:168] LocalClient.Create starting
	I1025 14:34:32.619421    5482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:34:32.619478    5482 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:32.619501    5482 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:32.619566    5482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:34:32.619603    5482 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:32.619620    5482 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:32.620100    5482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:34:32.762597    5482 main.go:141] libmachine: Creating SSH key...
	I1025 14:34:32.824911    5482 main.go:141] libmachine: Creating Disk image...
	I1025 14:34:32.824917    5482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:34:32.825105    5482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/disk.qcow2
	I1025 14:34:32.837476    5482 main.go:141] libmachine: STDOUT: 
	I1025 14:34:32.837490    5482 main.go:141] libmachine: STDERR: 
	I1025 14:34:32.837539    5482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/disk.qcow2 +20000M
	I1025 14:34:32.848008    5482 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:34:32.848023    5482 main.go:141] libmachine: STDERR: 
	I1025 14:34:32.848035    5482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/disk.qcow2
	I1025 14:34:32.848044    5482 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:34:32.848084    5482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:7f:4f:4c:18:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/disk.qcow2
	I1025 14:34:32.849826    5482 main.go:141] libmachine: STDOUT: 
	I1025 14:34:32.849840    5482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:32.849851    5482 client.go:171] LocalClient.Create took 230.548917ms
	I1025 14:34:34.850387    5482 start.go:128] duration metric: createHost completed in 2.287861417s
	I1025 14:34:34.850448    5482 start.go:83] releasing machines lock for "embed-certs-202000", held for 2.2882725s
	W1025 14:34:34.850854    5482 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-202000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-202000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:34.862939    5482 out.go:177] 
	W1025 14:34:34.867160    5482 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:34.867210    5482 out.go:239] * 
	* 
	W1025 14:34:34.870024    5482 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:34:34.878979    5482 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-202000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000: exit status 7 (72.221125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-549000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000: exit status 7 (33.911875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-549000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-549000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-549000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-549000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.480375ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-549000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-549000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000: exit status 7 (31.842875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-549000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-549000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-549000 "sudo crictl images -o json": exit status 89 (41.697292ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-549000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-549000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-549000"
start_stop_delete_test.go:304: v1.28.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.3",
- 	"registry.k8s.io/kube-controller-manager:v1.28.3",
- 	"registry.k8s.io/kube-proxy:v1.28.3",
- 	"registry.k8s.io/kube-scheduler:v1.28.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000: exit status 7 (31.342042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-549000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-549000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-549000 --alsologtostderr -v=1: exit status 89 (43.2695ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-549000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:29.457796    5506 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:29.457974    5506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:29.457977    5506 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:29.457979    5506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:29.458109    5506 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:29.458314    5506 out.go:303] Setting JSON to false
	I1025 14:34:29.458322    5506 mustload.go:65] Loading cluster: no-preload-549000
	I1025 14:34:29.458526    5506 config.go:182] Loaded profile config "no-preload-549000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:29.462266    5506 out.go:177] * The control plane node must be running for this command
	I1025 14:34:29.466203    5506 out.go:177]   To start a cluster, run: "minikube start -p no-preload-549000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-549000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000: exit status 7 (31.885959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-549000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000: exit status 7 (31.6775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-549000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-040000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-040000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.3: exit status 80 (9.90150675s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-040000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-040000 in cluster default-k8s-diff-port-040000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-040000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:30.178048    5541 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:30.178180    5541 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:30.178183    5541 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:30.178185    5541 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:30.178313    5541 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:30.179389    5541 out.go:303] Setting JSON to false
	I1025 14:34:30.195556    5541 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2044,"bootTime":1698267626,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:34:30.195637    5541 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:34:30.201002    5541 out.go:177] * [default-k8s-diff-port-040000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:34:30.208939    5541 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:34:30.212874    5541 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:34:30.209008    5541 notify.go:220] Checking for updates...
	I1025 14:34:30.218963    5541 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:34:30.221908    5541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:34:30.224807    5541 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:34:30.227868    5541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:34:30.231202    5541 config.go:182] Loaded profile config "embed-certs-202000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:30.231262    5541 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:30.231304    5541 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:34:30.235844    5541 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:34:30.242954    5541 start.go:298] selected driver: qemu2
	I1025 14:34:30.242963    5541 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:34:30.242970    5541 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:34:30.245372    5541 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:34:30.249840    5541 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:34:30.252946    5541 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:34:30.252974    5541 cni.go:84] Creating CNI manager for ""
	I1025 14:34:30.252986    5541 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:34:30.252998    5541 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 14:34:30.253005    5541 start_flags.go:323] config:
	{Name:default-k8s-diff-port-040000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-040000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stat
icIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:30.257544    5541 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:30.261897    5541 out.go:177] * Starting control plane node default-k8s-diff-port-040000 in cluster default-k8s-diff-port-040000
	I1025 14:34:30.265860    5541 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:34:30.265873    5541 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:34:30.265882    5541 cache.go:56] Caching tarball of preloaded images
	I1025 14:34:30.265935    5541 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:34:30.265941    5541 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:34:30.265998    5541 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/default-k8s-diff-port-040000/config.json ...
	I1025 14:34:30.266008    5541 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/default-k8s-diff-port-040000/config.json: {Name:mkab48f19da7fcac5463dab61cfd842ff7db5775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:34:30.266219    5541 start.go:365] acquiring machines lock for default-k8s-diff-port-040000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:30.266249    5541 start.go:369] acquired machines lock for "default-k8s-diff-port-040000" in 23.125µs
	I1025 14:34:30.266259    5541 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-040000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-040000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:34:30.266286    5541 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:34:30.274865    5541 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:34:30.291321    5541 start.go:159] libmachine.API.Create for "default-k8s-diff-port-040000" (driver="qemu2")
	I1025 14:34:30.291349    5541 client.go:168] LocalClient.Create starting
	I1025 14:34:30.291405    5541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:34:30.291433    5541 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:30.291443    5541 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:30.291480    5541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:34:30.291497    5541 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:30.291503    5541 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:30.291806    5541 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:34:30.416999    5541 main.go:141] libmachine: Creating SSH key...
	I1025 14:34:30.534865    5541 main.go:141] libmachine: Creating Disk image...
	I1025 14:34:30.534870    5541 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:34:30.535028    5541 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/disk.qcow2
	I1025 14:34:30.547262    5541 main.go:141] libmachine: STDOUT: 
	I1025 14:34:30.547280    5541 main.go:141] libmachine: STDERR: 
	I1025 14:34:30.547341    5541 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/disk.qcow2 +20000M
	I1025 14:34:30.557926    5541 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:34:30.557940    5541 main.go:141] libmachine: STDERR: 
	I1025 14:34:30.557958    5541 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/disk.qcow2
	I1025 14:34:30.557965    5541 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:34:30.557994    5541 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:78:3f:33:e9:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/disk.qcow2
	I1025 14:34:30.559663    5541 main.go:141] libmachine: STDOUT: 
	I1025 14:34:30.559674    5541 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:30.559692    5541 client.go:171] LocalClient.Create took 268.340458ms
	I1025 14:34:32.561887    5541 start.go:128] duration metric: createHost completed in 2.295601417s
	I1025 14:34:32.561966    5541 start.go:83] releasing machines lock for "default-k8s-diff-port-040000", held for 2.295734875s
	W1025 14:34:32.562022    5541 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:32.575068    5541 out.go:177] * Deleting "default-k8s-diff-port-040000" in qemu2 ...
	W1025 14:34:32.598137    5541 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:32.598173    5541 start.go:706] Will try again in 5 seconds ...
	I1025 14:34:37.600360    5541 start.go:365] acquiring machines lock for default-k8s-diff-port-040000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:37.600853    5541 start.go:369] acquired machines lock for "default-k8s-diff-port-040000" in 333.708µs
	I1025 14:34:37.601026    5541 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-040000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-040000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:34:37.601271    5541 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:34:37.610778    5541 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:34:37.661642    5541 start.go:159] libmachine.API.Create for "default-k8s-diff-port-040000" (driver="qemu2")
	I1025 14:34:37.661695    5541 client.go:168] LocalClient.Create starting
	I1025 14:34:37.661805    5541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:34:37.661863    5541 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:37.661881    5541 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:37.661960    5541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:34:37.661990    5541 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:37.662008    5541 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:37.662560    5541 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:34:37.799273    5541 main.go:141] libmachine: Creating SSH key...
	I1025 14:34:37.980673    5541 main.go:141] libmachine: Creating Disk image...
	I1025 14:34:37.980684    5541 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:34:37.980868    5541 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/disk.qcow2
	I1025 14:34:37.993176    5541 main.go:141] libmachine: STDOUT: 
	I1025 14:34:37.993194    5541 main.go:141] libmachine: STDERR: 
	I1025 14:34:37.993261    5541 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/disk.qcow2 +20000M
	I1025 14:34:38.003727    5541 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:34:38.003742    5541 main.go:141] libmachine: STDERR: 
	I1025 14:34:38.003791    5541 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/disk.qcow2
	I1025 14:34:38.003805    5541 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:34:38.003845    5541 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:88:4f:df:d4:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/disk.qcow2
	I1025 14:34:38.005562    5541 main.go:141] libmachine: STDOUT: 
	I1025 14:34:38.005576    5541 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:38.005589    5541 client.go:171] LocalClient.Create took 343.893583ms
	I1025 14:34:40.007832    5541 start.go:128] duration metric: createHost completed in 2.406552125s
	I1025 14:34:40.007912    5541 start.go:83] releasing machines lock for "default-k8s-diff-port-040000", held for 2.407036084s
	W1025 14:34:40.008454    5541 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-040000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-040000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:40.017908    5541 out.go:177] 
	W1025 14:34:40.023140    5541 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:40.023247    5541 out.go:239] * 
	* 
	W1025 14:34:40.025930    5541 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:34:40.035066    5541 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-040000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000: exit status 7 (67.781667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-202000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-202000 create -f testdata/busybox.yaml: exit status 1 (28.415334ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-202000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000: exit status 7 (31.423958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-202000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000: exit status 7 (31.822959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-202000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-202000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-202000 describe deploy/metrics-server -n kube-system: exit status 1 (25.43925ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-202000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-202000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000: exit status 7 (31.403625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (5.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-202000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-202000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.3: exit status 80 (5.18263775s)

                                                
                                                
-- stdout --
	* [embed-certs-202000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-202000 in cluster embed-certs-202000
	* Restarting existing qemu2 VM for "embed-certs-202000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-202000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:35.376134    5573 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:35.376294    5573 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:35.376297    5573 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:35.376300    5573 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:35.376418    5573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:35.377359    5573 out.go:303] Setting JSON to false
	I1025 14:34:35.393276    5573 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2049,"bootTime":1698267626,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:34:35.393369    5573 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:34:35.397230    5573 out.go:177] * [embed-certs-202000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:34:35.404240    5573 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:34:35.404300    5573 notify.go:220] Checking for updates...
	I1025 14:34:35.412208    5573 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:34:35.415295    5573 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:34:35.418247    5573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:34:35.421233    5573 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:34:35.424245    5573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:34:35.427493    5573 config.go:182] Loaded profile config "embed-certs-202000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:35.427738    5573 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:34:35.432314    5573 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 14:34:35.438188    5573 start.go:298] selected driver: qemu2
	I1025 14:34:35.438197    5573 start.go:902] validating driver "qemu2" against &{Name:embed-certs-202000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-202000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:35.438265    5573 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:34:35.440536    5573 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:34:35.440559    5573 cni.go:84] Creating CNI manager for ""
	I1025 14:34:35.440566    5573 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:34:35.440570    5573 start_flags.go:323] config:
	{Name:embed-certs-202000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-202000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:35.444889    5573 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:35.453275    5573 out.go:177] * Starting control plane node embed-certs-202000 in cluster embed-certs-202000
	I1025 14:34:35.456207    5573 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:34:35.456221    5573 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:34:35.456229    5573 cache.go:56] Caching tarball of preloaded images
	I1025 14:34:35.456282    5573 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:34:35.456288    5573 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:34:35.456340    5573 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/embed-certs-202000/config.json ...
	I1025 14:34:35.456765    5573 start.go:365] acquiring machines lock for embed-certs-202000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:35.456799    5573 start.go:369] acquired machines lock for "embed-certs-202000" in 27.958µs
	I1025 14:34:35.456808    5573 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:34:35.456813    5573 fix.go:54] fixHost starting: 
	I1025 14:34:35.456933    5573 fix.go:102] recreateIfNeeded on embed-certs-202000: state=Stopped err=<nil>
	W1025 14:34:35.456941    5573 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:34:35.465220    5573 out.go:177] * Restarting existing qemu2 VM for "embed-certs-202000" ...
	I1025 14:34:35.468235    5573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:7f:4f:4c:18:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/disk.qcow2
	I1025 14:34:35.470322    5573 main.go:141] libmachine: STDOUT: 
	I1025 14:34:35.470339    5573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:35.470372    5573 fix.go:56] fixHost completed within 13.55925ms
	I1025 14:34:35.470377    5573 start.go:83] releasing machines lock for "embed-certs-202000", held for 13.573708ms
	W1025 14:34:35.470382    5573 start.go:691] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:35.470414    5573 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:35.470418    5573 start.go:706] Will try again in 5 seconds ...
	I1025 14:34:40.472406    5573 start.go:365] acquiring machines lock for embed-certs-202000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:40.472473    5573 start.go:369] acquired machines lock for "embed-certs-202000" in 41.959µs
	I1025 14:34:40.472484    5573 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:34:40.472489    5573 fix.go:54] fixHost starting: 
	I1025 14:34:40.472612    5573 fix.go:102] recreateIfNeeded on embed-certs-202000: state=Stopped err=<nil>
	W1025 14:34:40.472623    5573 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:34:40.485951    5573 out.go:177] * Restarting existing qemu2 VM for "embed-certs-202000" ...
	I1025 14:34:40.493026    5573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:7f:4f:4c:18:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/embed-certs-202000/disk.qcow2
	I1025 14:34:40.495205    5573 main.go:141] libmachine: STDOUT: 
	I1025 14:34:40.495220    5573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:40.495239    5573 fix.go:56] fixHost completed within 22.751042ms
	I1025 14:34:40.495244    5573 start.go:83] releasing machines lock for "embed-certs-202000", held for 22.766375ms
	W1025 14:34:40.495284    5573 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-202000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-202000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:40.501999    5573 out.go:177] 
	W1025 14:34:40.505930    5573 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:40.505936    5573 out.go:239] * 
	* 
	W1025 14:34:40.506462    5573 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:34:40.517939    5573 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-202000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000: exit status 7 (33.759416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-040000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-040000 create -f testdata/busybox.yaml: exit status 1 (27.873709ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-040000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000: exit status 7 (31.60625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-040000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000: exit status 7 (31.395875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-040000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-040000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-040000 describe deploy/metrics-server -n kube-system: exit status 1 (25.699041ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-040000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-040000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000: exit status 7 (31.874625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-040000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-040000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.3: exit status 80 (5.209895292s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-040000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-040000 in cluster default-k8s-diff-port-040000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:40.542161    5608 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:40.542311    5608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:40.542314    5608 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:40.542316    5608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:40.542446    5608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:40.544019    5608 out.go:303] Setting JSON to false
	I1025 14:34:40.563292    5608 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2054,"bootTime":1698267626,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:34:40.563392    5608 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:34:40.567951    5608 out.go:177] * [default-k8s-diff-port-040000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:34:40.578950    5608 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:34:40.575088    5608 notify.go:220] Checking for updates...
	I1025 14:34:40.586948    5608 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:34:40.589974    5608 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:34:40.591181    5608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:34:40.593998    5608 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:34:40.600363    5608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:34:40.605209    5608 config.go:182] Loaded profile config "default-k8s-diff-port-040000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:40.605454    5608 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:34:40.609959    5608 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 14:34:40.617892    5608 start.go:298] selected driver: qemu2
	I1025 14:34:40.617898    5608 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-040000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-040000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:40.617952    5608 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:34:40.620336    5608 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 14:34:40.620366    5608 cni.go:84] Creating CNI manager for ""
	I1025 14:34:40.620372    5608 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:34:40.620378    5608 start_flags.go:323] config:
	{Name:default-k8s-diff-port-040000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-0400
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:40.624757    5608 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:40.628962    5608 out.go:177] * Starting control plane node default-k8s-diff-port-040000 in cluster default-k8s-diff-port-040000
	I1025 14:34:40.636968    5608 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:34:40.636990    5608 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:34:40.636997    5608 cache.go:56] Caching tarball of preloaded images
	I1025 14:34:40.637057    5608 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:34:40.637069    5608 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:34:40.637136    5608 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/default-k8s-diff-port-040000/config.json ...
	I1025 14:34:40.637548    5608 start.go:365] acquiring machines lock for default-k8s-diff-port-040000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:40.637576    5608 start.go:369] acquired machines lock for "default-k8s-diff-port-040000" in 20.083µs
	I1025 14:34:40.637585    5608 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:34:40.637589    5608 fix.go:54] fixHost starting: 
	I1025 14:34:40.637702    5608 fix.go:102] recreateIfNeeded on default-k8s-diff-port-040000: state=Stopped err=<nil>
	W1025 14:34:40.637710    5608 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:34:40.641919    5608 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-040000" ...
	I1025 14:34:40.647508    5608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:88:4f:df:d4:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/disk.qcow2
	I1025 14:34:40.650007    5608 main.go:141] libmachine: STDOUT: 
	I1025 14:34:40.650036    5608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:40.650077    5608 fix.go:56] fixHost completed within 12.487875ms
	I1025 14:34:40.650083    5608 start.go:83] releasing machines lock for "default-k8s-diff-port-040000", held for 12.502375ms
	W1025 14:34:40.650092    5608 start.go:691] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:40.650142    5608 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:40.650146    5608 start.go:706] Will try again in 5 seconds ...
	I1025 14:34:45.652399    5608 start.go:365] acquiring machines lock for default-k8s-diff-port-040000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:45.652792    5608 start.go:369] acquired machines lock for "default-k8s-diff-port-040000" in 280.041µs
	I1025 14:34:45.652902    5608 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:34:45.652928    5608 fix.go:54] fixHost starting: 
	I1025 14:34:45.653666    5608 fix.go:102] recreateIfNeeded on default-k8s-diff-port-040000: state=Stopped err=<nil>
	W1025 14:34:45.653693    5608 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:34:45.668425    5608 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-040000" ...
	I1025 14:34:45.673353    5608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:88:4f:df:d4:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/default-k8s-diff-port-040000/disk.qcow2
	I1025 14:34:45.682800    5608 main.go:141] libmachine: STDOUT: 
	I1025 14:34:45.682850    5608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:45.682949    5608 fix.go:56] fixHost completed within 30.023167ms
	I1025 14:34:45.682970    5608 start.go:83] releasing machines lock for "default-k8s-diff-port-040000", held for 30.15575ms
	W1025 14:34:45.683132    5608 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-040000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-040000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:45.691276    5608 out.go:177] 
	W1025 14:34:45.694215    5608 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:45.694255    5608 out.go:239] * 
	* 
	W1025 14:34:45.696213    5608 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:34:45.706174    5608 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-040000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000: exit status 7 (65.973042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-202000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000: exit status 7 (37.3475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-202000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-202000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-202000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.440666ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-202000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-202000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000: exit status 7 (31.223125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-202000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-202000 "sudo crictl images -o json": exit status 89 (42.653125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-202000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-202000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-202000"
start_stop_delete_test.go:304: v1.28.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.3",
- 	"registry.k8s.io/kube-controller-manager:v1.28.3",
- 	"registry.k8s.io/kube-proxy:v1.28.3",
- 	"registry.k8s.io/kube-scheduler:v1.28.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000: exit status 7 (30.467625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-202000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-202000 --alsologtostderr -v=1: exit status 89 (42.078458ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-202000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:40.762243    5626 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:40.762425    5626 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:40.762428    5626 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:40.762431    5626 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:40.762569    5626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:40.762767    5626 out.go:303] Setting JSON to false
	I1025 14:34:40.762778    5626 mustload.go:65] Loading cluster: embed-certs-202000
	I1025 14:34:40.762978    5626 config.go:182] Loaded profile config "embed-certs-202000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:40.767000    5626 out.go:177] * The control plane node must be running for this command
	I1025 14:34:40.771066    5626 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-202000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-202000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000: exit status 7 (31.067708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-202000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000: exit status 7 (31.315ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (9.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-155000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-155000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.3: exit status 80 (9.766872791s)

                                                
                                                
-- stdout --
	* [newest-cni-155000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-155000 in cluster newest-cni-155000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-155000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:41.247257    5649 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:41.247395    5649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:41.247398    5649 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:41.247402    5649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:41.247533    5649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:41.248586    5649 out.go:303] Setting JSON to false
	I1025 14:34:41.264577    5649 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2055,"bootTime":1698267626,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:34:41.264650    5649 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:34:41.269836    5649 out.go:177] * [newest-cni-155000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:34:41.276761    5649 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:34:41.280820    5649 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:34:41.276814    5649 notify.go:220] Checking for updates...
	I1025 14:34:41.286825    5649 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:34:41.289866    5649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:34:41.292807    5649 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:34:41.295837    5649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:34:41.299110    5649 config.go:182] Loaded profile config "default-k8s-diff-port-040000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:41.299175    5649 config.go:182] Loaded profile config "multinode-418000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:41.299220    5649 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:34:41.302808    5649 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 14:34:41.309793    5649 start.go:298] selected driver: qemu2
	I1025 14:34:41.309801    5649 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:34:41.309807    5649 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:34:41.312143    5649 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1025 14:34:41.312166    5649 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1025 14:34:41.315850    5649 out.go:177] * Automatically selected the socket_vmnet network
	I1025 14:34:41.322911    5649 start_flags.go:945] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 14:34:41.322945    5649 cni.go:84] Creating CNI manager for ""
	I1025 14:34:41.322953    5649 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:34:41.322957    5649 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 14:34:41.322963    5649 start_flags.go:323] config:
	{Name:newest-cni-155000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-155000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/s
ocket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:41.327554    5649 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:41.330817    5649 out.go:177] * Starting control plane node newest-cni-155000 in cluster newest-cni-155000
	I1025 14:34:41.338873    5649 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:34:41.338888    5649 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:34:41.338897    5649 cache.go:56] Caching tarball of preloaded images
	I1025 14:34:41.338978    5649 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:34:41.338992    5649 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:34:41.339061    5649 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/newest-cni-155000/config.json ...
	I1025 14:34:41.339075    5649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/newest-cni-155000/config.json: {Name:mk6b5ca3b9fe56f2fe75d692d81bc8b6fad9dd56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:34:41.339282    5649 start.go:365] acquiring machines lock for newest-cni-155000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:41.339313    5649 start.go:369] acquired machines lock for "newest-cni-155000" in 25.583µs
	I1025 14:34:41.339324    5649 start.go:93] Provisioning new machine with config: &{Name:newest-cni-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-155000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:34:41.339352    5649 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:34:41.347815    5649 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:34:41.366130    5649 start.go:159] libmachine.API.Create for "newest-cni-155000" (driver="qemu2")
	I1025 14:34:41.366159    5649 client.go:168] LocalClient.Create starting
	I1025 14:34:41.366220    5649 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:34:41.366246    5649 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:41.366256    5649 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:41.366294    5649 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:34:41.366313    5649 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:41.366321    5649 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:41.366668    5649 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:34:41.500229    5649 main.go:141] libmachine: Creating SSH key...
	I1025 14:34:41.540985    5649 main.go:141] libmachine: Creating Disk image...
	I1025 14:34:41.540991    5649 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:34:41.541164    5649 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/disk.qcow2
	I1025 14:34:41.553460    5649 main.go:141] libmachine: STDOUT: 
	I1025 14:34:41.553476    5649 main.go:141] libmachine: STDERR: 
	I1025 14:34:41.553529    5649 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/disk.qcow2 +20000M
	I1025 14:34:41.563974    5649 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:34:41.563990    5649 main.go:141] libmachine: STDERR: 
	I1025 14:34:41.564015    5649 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/disk.qcow2
	I1025 14:34:41.564024    5649 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:34:41.564058    5649 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:3f:b5:d0:4c:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/disk.qcow2
	I1025 14:34:41.565803    5649 main.go:141] libmachine: STDOUT: 
	I1025 14:34:41.565817    5649 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:41.565837    5649 client.go:171] LocalClient.Create took 199.673792ms
	I1025 14:34:43.568005    5649 start.go:128] duration metric: createHost completed in 2.228656625s
	I1025 14:34:43.568065    5649 start.go:83] releasing machines lock for "newest-cni-155000", held for 2.228769916s
	W1025 14:34:43.568149    5649 start.go:691] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:43.578823    5649 out.go:177] * Deleting "newest-cni-155000" in qemu2 ...
	W1025 14:34:43.602683    5649 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:43.602709    5649 start.go:706] Will try again in 5 seconds ...
	I1025 14:34:48.604896    5649 start.go:365] acquiring machines lock for newest-cni-155000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:48.605313    5649 start.go:369] acquired machines lock for "newest-cni-155000" in 313.708µs
	I1025 14:34:48.605471    5649 start.go:93] Provisioning new machine with config: &{Name:newest-cni-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-155000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 14:34:48.605669    5649 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 14:34:48.614592    5649 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 14:34:48.663509    5649 start.go:159] libmachine.API.Create for "newest-cni-155000" (driver="qemu2")
	I1025 14:34:48.663554    5649 client.go:168] LocalClient.Create starting
	I1025 14:34:48.663656    5649 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/ca.pem
	I1025 14:34:48.663724    5649 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:48.663750    5649 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:48.663816    5649 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-1304/.minikube/certs/cert.pem
	I1025 14:34:48.663850    5649 main.go:141] libmachine: Decoding PEM data...
	I1025 14:34:48.663865    5649 main.go:141] libmachine: Parsing certificate...
	I1025 14:34:48.664439    5649 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso...
	I1025 14:34:48.798687    5649 main.go:141] libmachine: Creating SSH key...
	I1025 14:34:48.921688    5649 main.go:141] libmachine: Creating Disk image...
	I1025 14:34:48.921694    5649 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 14:34:48.921860    5649 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/disk.qcow2.raw /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/disk.qcow2
	I1025 14:34:48.934118    5649 main.go:141] libmachine: STDOUT: 
	I1025 14:34:48.934133    5649 main.go:141] libmachine: STDERR: 
	I1025 14:34:48.934188    5649 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/disk.qcow2 +20000M
	I1025 14:34:48.944629    5649 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 14:34:48.944642    5649 main.go:141] libmachine: STDERR: 
	I1025 14:34:48.944655    5649 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/disk.qcow2
	I1025 14:34:48.944662    5649 main.go:141] libmachine: Starting QEMU VM...
	I1025 14:34:48.944694    5649 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:77:f9:ea:c8:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/disk.qcow2
	I1025 14:34:48.946471    5649 main.go:141] libmachine: STDOUT: 
	I1025 14:34:48.946485    5649 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:48.946495    5649 client.go:171] LocalClient.Create took 282.938375ms
	I1025 14:34:50.948648    5649 start.go:128] duration metric: createHost completed in 2.342954833s
	I1025 14:34:50.948765    5649 start.go:83] releasing machines lock for "newest-cni-155000", held for 2.343413709s
	W1025 14:34:50.949181    5649 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-155000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-155000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:50.959695    5649 out.go:177] 
	W1025 14:34:50.964466    5649 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:50.964490    5649 out.go:239] * 
	* 
	W1025 14:34:50.965790    5649 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:34:50.975831    5649 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-155000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-155000 -n newest-cni-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-155000 -n newest-cni-155000: exit status 7 (58.332292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.83s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-040000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000: exit status 7 (33.749792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-040000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-040000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-040000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.961458ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-040000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-040000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000: exit status 7 (32.090458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-040000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-040000 "sudo crictl images -o json": exit status 89 (42.942959ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-040000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-040000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-040000"
start_stop_delete_test.go:304: v1.28.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.3",
- 	"registry.k8s.io/kube-controller-manager:v1.28.3",
- 	"registry.k8s.io/kube-proxy:v1.28.3",
- 	"registry.k8s.io/kube-scheduler:v1.28.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000: exit status 7 (31.873375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-040000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-040000 --alsologtostderr -v=1: exit status 89 (42.776208ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-040000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:45.987793    5673 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:45.987970    5673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:45.987973    5673 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:45.987975    5673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:45.988111    5673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:45.988341    5673 out.go:303] Setting JSON to false
	I1025 14:34:45.988350    5673 mustload.go:65] Loading cluster: default-k8s-diff-port-040000
	I1025 14:34:45.988537    5673 config.go:182] Loaded profile config "default-k8s-diff-port-040000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:45.992159    5673 out.go:177] * The control plane node must be running for this command
	I1025 14:34:45.996289    5673 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-040000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-040000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000: exit status 7 (31.555542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-040000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000: exit status 7 (30.660125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-155000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-155000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.3: exit status 80 (5.184933833s)

                                                
                                                
-- stdout --
	* [newest-cni-155000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-155000 in cluster newest-cni-155000
	* Restarting existing qemu2 VM for "newest-cni-155000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-155000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:51.303533    5710 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:51.303687    5710 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:51.303690    5710 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:51.303697    5710 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:51.303850    5710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:51.304826    5710 out.go:303] Setting JSON to false
	I1025 14:34:51.320920    5710 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2065,"bootTime":1698267626,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:34:51.321004    5710 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:34:51.325621    5710 out.go:177] * [newest-cni-155000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:34:51.332575    5710 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:34:51.332621    5710 notify.go:220] Checking for updates...
	I1025 14:34:51.336399    5710 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:34:51.340618    5710 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:34:51.343581    5710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:34:51.345071    5710 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:34:51.347665    5710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:34:51.350844    5710 config.go:182] Loaded profile config "newest-cni-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:51.351090    5710 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:34:51.355409    5710 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 14:34:51.362538    5710 start.go:298] selected driver: qemu2
	I1025 14:34:51.362545    5710 start.go:902] validating driver "qemu2" against &{Name:newest-cni-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-155000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<n
il> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:51.362614    5710 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:34:51.364776    5710 start_flags.go:945] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 14:34:51.364801    5710 cni.go:84] Creating CNI manager for ""
	I1025 14:34:51.364810    5710 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:34:51.364816    5710 start_flags.go:323] config:
	{Name:newest-cni-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-155000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeReques
ted:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:34:51.369151    5710 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:34:51.377519    5710 out.go:177] * Starting control plane node newest-cni-155000 in cluster newest-cni-155000
	I1025 14:34:51.381630    5710 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:34:51.381657    5710 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:34:51.381670    5710 cache.go:56] Caching tarball of preloaded images
	I1025 14:34:51.381726    5710 preload.go:174] Found /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 14:34:51.381732    5710 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 14:34:51.381798    5710 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/newest-cni-155000/config.json ...
	I1025 14:34:51.382158    5710 start.go:365] acquiring machines lock for newest-cni-155000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:51.382182    5710 start.go:369] acquired machines lock for "newest-cni-155000" in 17.875µs
	I1025 14:34:51.382190    5710 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:34:51.382196    5710 fix.go:54] fixHost starting: 
	I1025 14:34:51.382301    5710 fix.go:102] recreateIfNeeded on newest-cni-155000: state=Stopped err=<nil>
	W1025 14:34:51.382307    5710 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:34:51.386535    5710 out.go:177] * Restarting existing qemu2 VM for "newest-cni-155000" ...
	I1025 14:34:51.394499    5710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:77:f9:ea:c8:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/disk.qcow2
	I1025 14:34:51.396422    5710 main.go:141] libmachine: STDOUT: 
	I1025 14:34:51.396439    5710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:51.396471    5710 fix.go:56] fixHost completed within 14.277459ms
	I1025 14:34:51.396475    5710 start.go:83] releasing machines lock for "newest-cni-155000", held for 14.289833ms
	W1025 14:34:51.396480    5710 start.go:691] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:51.396515    5710 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:51.396519    5710 start.go:706] Will try again in 5 seconds ...
	I1025 14:34:56.398686    5710 start.go:365] acquiring machines lock for newest-cni-155000: {Name:mk4729a58435ced26e7327334c821feeba35a3ef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 14:34:56.399034    5710 start.go:369] acquired machines lock for "newest-cni-155000" in 269.167µs
	I1025 14:34:56.399142    5710 start.go:96] Skipping create...Using existing machine configuration
	I1025 14:34:56.399166    5710 fix.go:54] fixHost starting: 
	I1025 14:34:56.399986    5710 fix.go:102] recreateIfNeeded on newest-cni-155000: state=Stopped err=<nil>
	W1025 14:34:56.400036    5710 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 14:34:56.405675    5710 out.go:177] * Restarting existing qemu2 VM for "newest-cni-155000" ...
	I1025 14:34:56.413791    5710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:77:f9:ea:c8:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17488-1304/.minikube/machines/newest-cni-155000/disk.qcow2
	I1025 14:34:56.423220    5710 main.go:141] libmachine: STDOUT: 
	I1025 14:34:56.423276    5710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 14:34:56.423380    5710 fix.go:56] fixHost completed within 24.211458ms
	I1025 14:34:56.423399    5710 start.go:83] releasing machines lock for "newest-cni-155000", held for 24.338791ms
	W1025 14:34:56.423597    5710 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-155000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-155000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 14:34:56.431674    5710 out.go:177] 
	W1025 14:34:56.434649    5710 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 14:34:56.434682    5710 out.go:239] * 
	* 
	W1025 14:34:56.437245    5710 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:34:56.444635    5710 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-155000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-155000 -n newest-cni-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-155000 -n newest-cni-155000: exit status 7 (70.990042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-155000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-155000 "sudo crictl images -o json": exit status 89 (45.6335ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-155000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-155000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-155000"
start_stop_delete_test.go:304: v1.28.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.3",
- 	"registry.k8s.io/kube-controller-manager:v1.28.3",
- 	"registry.k8s.io/kube-proxy:v1.28.3",
- 	"registry.k8s.io/kube-scheduler:v1.28.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-155000 -n newest-cni-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-155000 -n newest-cni-155000: exit status 7 (32.433083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-155000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-155000 --alsologtostderr -v=1: exit status 89 (44.09525ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-155000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:34:56.639587    5727 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:34:56.639749    5727 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:56.639752    5727 out.go:309] Setting ErrFile to fd 2...
	I1025 14:34:56.639754    5727 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:34:56.639901    5727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:34:56.640118    5727 out.go:303] Setting JSON to false
	I1025 14:34:56.640127    5727 mustload.go:65] Loading cluster: newest-cni-155000
	I1025 14:34:56.640337    5727 config.go:182] Loaded profile config "newest-cni-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:34:56.644174    5727 out.go:177] * The control plane node must be running for this command
	I1025 14:34:56.648234    5727 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-155000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-155000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-155000 -n newest-cni-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-155000 -n newest-cni-155000: exit status 7 (32.483917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-155000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-155000 -n newest-cni-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-155000 -n newest-cni-155000: exit status 7 (32.702875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                    

Test pass (150/259)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.11
10 TestDownloadOnly/v1.28.3/json-events 6.99
11 TestDownloadOnly/v1.28.3/preload-exists 0
14 TestDownloadOnly/v1.28.3/kubectl 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.26
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.24
19 TestBinaryMirror 0.36
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
25 TestAddons/Setup 124.77
27 TestAddons/parallel/Registry 14.16
29 TestAddons/parallel/InspektorGadget 10.23
30 TestAddons/parallel/MetricsServer 5.25
33 TestAddons/parallel/CSI 53.83
34 TestAddons/parallel/Headlamp 11.44
35 TestAddons/parallel/CloudSpanner 5.22
36 TestAddons/parallel/LocalPath 8.66
37 TestAddons/parallel/NvidiaDevicePlugin 5.16
40 TestAddons/serial/GCPAuth/Namespaces 0.07
41 TestAddons/StoppedEnableDisable 12.28
49 TestHyperKitDriverInstallOrUpdate 7.96
53 TestErrorSpam/start 0.37
54 TestErrorSpam/status 0.23
55 TestErrorSpam/pause 5.26
56 TestErrorSpam/unpause 5.54
57 TestErrorSpam/stop 111.41
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 47.49
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 35.15
64 TestFunctional/serial/KubeContext 0.03
65 TestFunctional/serial/KubectlGetPods 0.04
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.72
69 TestFunctional/serial/CacheCmd/cache/add_local 1.35
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
71 TestFunctional/serial/CacheCmd/cache/list 0.04
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.06
74 TestFunctional/serial/CacheCmd/cache/delete 0.08
75 TestFunctional/serial/MinikubeKubectlCmd 0.46
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.58
77 TestFunctional/serial/ExtraConfig 36.25
78 TestFunctional/serial/ComponentHealth 0.04
79 TestFunctional/serial/LogsCmd 0.66
80 TestFunctional/serial/LogsFileCmd 0.68
81 TestFunctional/serial/InvalidService 4.43
83 TestFunctional/parallel/ConfigCmd 0.24
84 TestFunctional/parallel/DashboardCmd 9.56
85 TestFunctional/parallel/DryRun 0.24
86 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/StatusCmd 0.26
92 TestFunctional/parallel/AddonsCmd 0.14
93 TestFunctional/parallel/PersistentVolumeClaim 24.31
95 TestFunctional/parallel/SSHCmd 0.14
96 TestFunctional/parallel/CpCmd 0.3
98 TestFunctional/parallel/FileSync 0.08
99 TestFunctional/parallel/CertSync 0.44
103 TestFunctional/parallel/NodeLabels 0.04
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.14
107 TestFunctional/parallel/License 0.2
108 TestFunctional/parallel/Version/short 0.04
109 TestFunctional/parallel/Version/components 0.24
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
114 TestFunctional/parallel/ImageCommands/ImageBuild 1.54
115 TestFunctional/parallel/ImageCommands/Setup 1.64
116 TestFunctional/parallel/DockerEnv/bash 0.43
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
120 TestFunctional/parallel/ServiceCmd/DeployApp 12.12
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.16
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.52
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.61
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
125 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
126 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
127 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.74
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.12
133 TestFunctional/parallel/ServiceCmd/List 0.09
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
136 TestFunctional/parallel/ServiceCmd/Format 0.11
137 TestFunctional/parallel/ServiceCmd/URL 0.11
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.06
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.19
145 TestFunctional/parallel/ProfileCmd/profile_list 0.16
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.16
147 TestFunctional/parallel/MountCmd/any-port 5.31
148 TestFunctional/parallel/MountCmd/specific-port 1.16
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.59
150 TestFunctional/delete_addon-resizer_images 0.11
151 TestFunctional/delete_my-image_image 0.04
152 TestFunctional/delete_minikube_cached_images 0.04
156 TestImageBuild/serial/Setup 31.83
157 TestImageBuild/serial/NormalBuild 1.03
159 TestImageBuild/serial/BuildWithDockerIgnore 0.13
160 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.11
163 TestIngressAddonLegacy/StartLegacyK8sCluster 70.71
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 20.85
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.23
170 TestJSONOutput/start/Command 46.19
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.28
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.22
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 12.08
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.35
198 TestMainNoArgs 0.04
199 TestMinikubeProfile 65.37
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
260 TestNoKubernetes/serial/ProfileList 0.15
261 TestNoKubernetes/serial/Stop 0.06
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
281 TestStartStop/group/old-k8s-version/serial/Stop 0.06
282 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
286 TestStartStop/group/no-preload/serial/Stop 0.06
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.1
303 TestStartStop/group/embed-certs/serial/Stop 0.06
304 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.1
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
323 TestStartStop/group/newest-cni/serial/Stop 0.06
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.1
326 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
327 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-774000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-774000: exit status 85 (107.413417ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-774000 | jenkins | v1.31.2 | 25 Oct 23 14:10 PDT |          |
	|         | -p download-only-774000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 14:10:09
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 14:10:09.184746    1725 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:10:09.184944    1725 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:10:09.184947    1725 out.go:309] Setting ErrFile to fd 2...
	I1025 14:10:09.184949    1725 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:10:09.185073    1725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	W1025 14:10:09.185167    1725 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17488-1304/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17488-1304/.minikube/config/config.json: no such file or directory
	I1025 14:10:09.186290    1725 out.go:303] Setting JSON to true
	I1025 14:10:09.204424    1725 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":583,"bootTime":1698267626,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:10:09.204503    1725 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:10:09.213372    1725 out.go:97] [download-only-774000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:10:09.217367    1725 out.go:169] MINIKUBE_LOCATION=17488
	W1025 14:10:09.213498    1725 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 14:10:09.213524    1725 notify.go:220] Checking for updates...
	I1025 14:10:09.227373    1725 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:10:09.235313    1725 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:10:09.242382    1725 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:10:09.249386    1725 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	W1025 14:10:09.257275    1725 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 14:10:09.257481    1725 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:10:09.262374    1725 out.go:97] Using the qemu2 driver based on user configuration
	I1025 14:10:09.262381    1725 start.go:298] selected driver: qemu2
	I1025 14:10:09.262395    1725 start.go:902] validating driver "qemu2" against <nil>
	I1025 14:10:09.262453    1725 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 14:10:09.266287    1725 out.go:169] Automatically selected the socket_vmnet network
	I1025 14:10:09.273752    1725 start_flags.go:386] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1025 14:10:09.273838    1725 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 14:10:09.273937    1725 cni.go:84] Creating CNI manager for ""
	I1025 14:10:09.273957    1725 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 14:10:09.273966    1725 start_flags.go:323] config:
	{Name:download-only-774000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-774000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:10:09.280796    1725 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:10:09.285381    1725 out.go:97] Downloading VM boot image ...
	I1025 14:10:09.285396    1725 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/iso/arm64/minikube-v1.31.0-1697471113-17434-arm64.iso
	I1025 14:10:14.171529    1725 out.go:97] Starting control plane node download-only-774000 in cluster download-only-774000
	I1025 14:10:14.171567    1725 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 14:10:14.228275    1725 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1025 14:10:14.228290    1725 cache.go:56] Caching tarball of preloaded images
	I1025 14:10:14.228471    1725 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 14:10:14.233081    1725 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1025 14:10:14.233088    1725 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 14:10:14.309723    1725 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1025 14:10:22.168337    1725 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 14:10:22.168463    1725 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 14:10:22.810927    1725 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 14:10:22.811130    1725 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/download-only-774000/config.json ...
	I1025 14:10:22.811147    1725 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/download-only-774000/config.json: {Name:mkb88c3470620066988bab56fb499300b62e0198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 14:10:22.811357    1725 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 14:10:22.811518    1725 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I1025 14:10:23.555598    1725 out.go:169] 
	W1025 14:10:23.560697    1725 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17488-1304/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1046145a0 0x1046145a0 0x1046145a0 0x1046145a0 0x1046145a0 0x1046145a0 0x1046145a0] Decompressors:map[bz2:0x14000680000 gz:0x14000680008 tar:0x1400000ffb0 tar.bz2:0x1400000ffc0 tar.gz:0x1400000ffd0 tar.xz:0x1400000ffe0 tar.zst:0x1400000fff0 tbz2:0x1400000ffc0 tgz:0x1400000ffd0 txz:0x1400000ffe0 tzst:0x1400000fff0 xz:0x14000680010 zip:0x14000680020 zst:0x14000680018] Getters:map[file:0x140004a5650 http:0x1400051e140 https:0x1400051e190] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1025 14:10:23.560726    1725 out_reason.go:110] 
	W1025 14:10:23.567550    1725 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 14:10:23.571584    1725 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-774000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.11s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/json-events (6.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-774000 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-774000 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=qemu2 : (6.992184209s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (6.99s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/kubectl
--- PASS: TestDownloadOnly/v1.28.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-774000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-774000: exit status 85 (79.124875ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-774000 | jenkins | v1.31.2 | 25 Oct 23 14:10 PDT |          |
	|         | -p download-only-774000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-774000 | jenkins | v1.31.2 | 25 Oct 23 14:10 PDT |          |
	|         | -p download-only-774000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 14:10:23
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 14:10:23.784488    1741 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:10:23.784633    1741 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:10:23.784636    1741 out.go:309] Setting ErrFile to fd 2...
	I1025 14:10:23.784640    1741 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:10:23.784783    1741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	W1025 14:10:23.784845    1741 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17488-1304/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17488-1304/.minikube/config/config.json: no such file or directory
	I1025 14:10:23.785775    1741 out.go:303] Setting JSON to true
	I1025 14:10:23.801766    1741 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":597,"bootTime":1698267626,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:10:23.801838    1741 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:10:23.806428    1741 out.go:97] [download-only-774000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:10:23.810323    1741 out.go:169] MINIKUBE_LOCATION=17488
	I1025 14:10:23.806537    1741 notify.go:220] Checking for updates...
	I1025 14:10:23.817362    1741 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:10:23.820320    1741 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:10:23.823352    1741 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:10:23.826422    1741 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	W1025 14:10:23.832314    1741 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 14:10:23.832669    1741 config.go:182] Loaded profile config "download-only-774000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1025 14:10:23.832720    1741 start.go:810] api.Load failed for download-only-774000: filestore "download-only-774000": Docker machine "download-only-774000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 14:10:23.832774    1741 driver.go:378] Setting default libvirt URI to qemu:///system
	W1025 14:10:23.832794    1741 start.go:810] api.Load failed for download-only-774000: filestore "download-only-774000": Docker machine "download-only-774000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 14:10:23.836295    1741 out.go:97] Using the qemu2 driver based on existing profile
	I1025 14:10:23.836303    1741 start.go:298] selected driver: qemu2
	I1025 14:10:23.836307    1741 start.go:902] validating driver "qemu2" against &{Name:download-only-774000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-774000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:10:23.838658    1741 cni.go:84] Creating CNI manager for ""
	I1025 14:10:23.838675    1741 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 14:10:23.838685    1741 start_flags.go:323] config:
	{Name:download-only-774000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-774000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:10:23.843250    1741 iso.go:125] acquiring lock: {Name:mk657ebb287a480226712599043bae2b04e046a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 14:10:23.846270    1741 out.go:97] Starting control plane node download-only-774000 in cluster download-only-774000
	I1025 14:10:23.846277    1741 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:10:23.903298    1741 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	I1025 14:10:23.903315    1741 cache.go:56] Caching tarball of preloaded images
	I1025 14:10:23.903464    1741 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 14:10:23.908593    1741 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1025 14:10:23.908600    1741 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4 ...
	I1025 14:10:23.987255    1741 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4?checksum=md5:afa72052808cee1859e38c1ae6d1a426 -> /Users/jenkins/minikube-integration/17488-1304/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-774000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.26s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.26s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-774000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.24s)

                                                
                                    
x
+
TestBinaryMirror (0.36s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-272000 --alsologtostderr --binary-mirror http://127.0.0.1:49314 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-272000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-272000
--- PASS: TestBinaryMirror (0.36s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-355000
addons_test.go:927: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-355000: exit status 85 (59.759333ms)

                                                
                                                
-- stdout --
	* Profile "addons-355000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-355000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-355000
addons_test.go:938: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-355000: exit status 85 (63.494791ms)

                                                
                                                
-- stdout --
	* Profile "addons-355000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-355000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (124.77s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-355000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-darwin-arm64 start -p addons-355000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns: (2m4.765307541s)
--- PASS: TestAddons/Setup (124.77s)

                                                
                                    
x
+
TestAddons/parallel/Registry (14.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 7.015875ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-8n8vt" [cae3a588-d76d-4af2-a97e-8e37b78c04b3] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008709292s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rxr5v" [1c04e71c-98ce-4108-9506-d0880f28a3e0] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009490958s
addons_test.go:339: (dbg) Run:  kubectl --context addons-355000 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-355000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-355000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.76326925s)
addons_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p addons-355000 ip
2023/10/25 14:12:50 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-darwin-arm64 -p addons-355000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.16s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.23s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-slc7b" [5d49cb38-ea36-400e-b3ae-d1b666a980f3] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007416542s
addons_test.go:840: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-355000
addons_test.go:840: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-355000: (5.220504s)
--- PASS: TestAddons/parallel/InspektorGadget (10.23s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 1.724334ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-25r7x" [85efce74-c581-409e-9d09-038930b453b7] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007117166s
addons_test.go:414: (dbg) Run:  kubectl --context addons-355000 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-darwin-arm64 -p addons-355000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)

                                                
                                    
x
+
TestAddons/parallel/CSI (53.83s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 7.242166ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-355000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-355000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [81612a75-60e5-4b87-bd46-8d4a8fe5de3d] Pending
helpers_test.go:344: "task-pv-pod" [81612a75-60e5-4b87-bd46-8d4a8fe5de3d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [81612a75-60e5-4b87-bd46-8d4a8fe5de3d] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.013243916s
addons_test.go:583: (dbg) Run:  kubectl --context addons-355000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-355000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-355000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-355000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-355000 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-355000 delete pod task-pv-pod: (1.156592792s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-355000 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-355000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-355000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4d865b7a-5090-48c5-bf71-2ccdd4a07dfd] Pending
helpers_test.go:344: "task-pv-pod-restore" [4d865b7a-5090-48c5-bf71-2ccdd4a07dfd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4d865b7a-5090-48c5-bf71-2ccdd4a07dfd] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.012665791s
addons_test.go:625: (dbg) Run:  kubectl --context addons-355000 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-355000 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-355000 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-darwin-arm64 -p addons-355000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-darwin-arm64 -p addons-355000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.1053095s)
addons_test.go:641: (dbg) Run:  out/minikube-darwin-arm64 -p addons-355000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.83s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (11.44s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-355000 --alsologtostderr -v=1
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-mxhl6" [1ce25b8f-8a7d-435c-bc4d-f5d599f42f47] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-mxhl6" [1ce25b8f-8a7d-435c-bc4d-f5d599f42f47] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.006819375s
--- PASS: TestAddons/parallel/Headlamp (11.44s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.22s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-gczg8" [967f965c-e754-4613-910b-225e1e411c5a] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006736334s
addons_test.go:859: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-355000
--- PASS: TestAddons/parallel/CloudSpanner (5.22s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.66s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-355000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-355000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-355000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5f70b29e-996e-4564-99dc-d4545ff4eb53] Pending
helpers_test.go:344: "test-local-path" [5f70b29e-996e-4564-99dc-d4545ff4eb53] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5f70b29e-996e-4564-99dc-d4545ff4eb53] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5f70b29e-996e-4564-99dc-d4545ff4eb53] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005588708s
addons_test.go:890: (dbg) Run:  kubectl --context addons-355000 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-darwin-arm64 -p addons-355000 ssh "cat /opt/local-path-provisioner/pvc-cc6901cf-d9fc-4f53-8183-180e9a68fcdf_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-355000 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-355000 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-darwin-arm64 -p addons-355000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.66s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.16s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wsrgl" [bd51af8f-ca72-4bf7-b1d8-6a39f9994c22] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0082985s
addons_test.go:954: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-355000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.07s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-355000 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-355000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.28s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-355000
addons_test.go:171: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-355000: (12.085888958s)
addons_test.go:175: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-355000
addons_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-355000
addons_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-355000
--- PASS: TestAddons/StoppedEnableDisable (12.28s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (7.96s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.96s)

                                                
                                    
x
+
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
x
+
TestErrorSpam/status (0.23s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 status: exit status 6 (77.753459ms)

                                                
                                                
-- stdout --
	nospam-607000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 14:14:50.516603    2046 status.go:415] kubeconfig endpoint: extract IP: "nospam-607000" does not appear in /Users/jenkins/minikube-integration/17488-1304/kubeconfig

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 status" failed: exit status 6
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 status: exit status 6 (74.656084ms)

                                                
                                                
-- stdout --
	nospam-607000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 14:14:50.591078    2048 status.go:415] kubeconfig endpoint: extract IP: "nospam-607000" does not appear in /Users/jenkins/minikube-integration/17488-1304/kubeconfig

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 status" failed: exit status 6
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 status: exit status 6 (76.132666ms)

                                                
                                                
-- stdout --
	nospam-607000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 14:14:50.667524    2050 status.go:415] kubeconfig endpoint: extract IP: "nospam-607000" does not appear in /Users/jenkins/minikube-integration/17488-1304/kubeconfig

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.23s)

                                                
                                    
x
+
TestErrorSpam/pause (5.26s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 pause: exit status 80 (1.536407958s)

                                                
                                                
-- stdout --
	* Pausing node nospam-607000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 pause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 pause: exit status 80 (1.916426458s)

                                                
                                                
-- stdout --
	* Pausing node nospam-607000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 pause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 pause: exit status 80 (1.810452583s)

                                                
                                                
-- stdout --
	* Pausing node nospam-607000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.26s)

                                                
                                    
x
+
TestErrorSpam/unpause (5.54s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 unpause: exit status 80 (1.79039075s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-607000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: kubelet start: sudo systemctl start kubelet: Process exited with status 5
	stdout:
	
	stderr:
	Failed to start kubelet.service: Unit kubelet.service not found.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 unpause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 unpause: exit status 80 (1.946748458s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-607000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: kubelet start: sudo systemctl start kubelet: Process exited with status 5
	stdout:
	
	stderr:
	Failed to start kubelet.service: Unit kubelet.service not found.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 unpause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 unpause: exit status 80 (1.806351417s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-607000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: kubelet start: sudo systemctl start kubelet: Process exited with status 5
	stdout:
	
	stderr:
	Failed to start kubelet.service: Unit kubelet.service not found.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.54s)

                                                
                                    
x
+
TestErrorSpam/stop (111.41s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 stop: (1m51.247104583s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-607000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-607000 stop
--- PASS: TestErrorSpam/stop (111.41s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17488-1304/.minikube/files/etc/test/nested/copy/1723/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (47.49s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-260000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E1025 14:17:36.789524    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
E1025 14:17:36.796278    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
E1025 14:17:36.808358    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
E1025 14:17:36.830454    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
E1025 14:17:36.872576    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
E1025 14:17:36.954704    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
E1025 14:17:37.116828    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
E1025 14:17:37.438925    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
E1025 14:17:38.081098    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
E1025 14:17:39.363232    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-260000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (47.494469375s)
--- PASS: TestFunctional/serial/StartWithProxy (47.49s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (35.15s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-260000 --alsologtostderr -v=8
E1025 14:17:41.925585    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
E1025 14:17:47.047936    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
E1025 14:17:57.290593    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-260000 --alsologtostderr -v=8: (35.146018792s)
functional_test.go:659: soft start took 35.146448541s for "functional-260000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.15s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-260000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-260000 cache add registry.k8s.io/pause:3.1: (1.307734667s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 cache add registry.k8s.io/pause:3.3
E1025 14:18:17.772982    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-260000 cache add registry.k8s.io/pause:3.3: (1.19180625s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-260000 cache add registry.k8s.io/pause:latest: (1.219035208s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.72s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-260000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4222823063/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 cache add minikube-local-cache-test:functional-260000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 cache delete minikube-local-cache-test:functional-260000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-260000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-260000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (79.461708ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 kubectl -- --context functional-260000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.46s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.58s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-260000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.58s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (36.25s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-260000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 14:18:58.735242    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-260000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.250215s)
functional_test.go:757: restart took 36.250323834s for "functional-260000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.25s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-260000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd690535442/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.68s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.43s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-260000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-260000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-260000: exit status 115 (111.541542ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30480 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-260000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-260000 delete -f testdata/invalidsvc.yaml: (1.20874075s)
--- PASS: TestFunctional/serial/InvalidService (4.43s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-260000 config get cpus: exit status 14 (37.081958ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-260000 config get cpus: exit status 14 (33.122625ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-260000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-260000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2711: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.56s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-260000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-260000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (121.326958ms)

                                                
                                                
-- stdout --
	* [functional-260000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:19:52.487617    2694 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:19:52.487758    2694 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:19:52.487761    2694 out.go:309] Setting ErrFile to fd 2...
	I1025 14:19:52.487763    2694 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:19:52.487899    2694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:19:52.488905    2694 out.go:303] Setting JSON to false
	I1025 14:19:52.506685    2694 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1166,"bootTime":1698267626,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:19:52.506758    2694 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:19:52.511578    2694 out.go:177] * [functional-260000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1025 14:19:52.519579    2694 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:19:52.523493    2694 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:19:52.519667    2694 notify.go:220] Checking for updates...
	I1025 14:19:52.528640    2694 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:19:52.531542    2694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:19:52.534570    2694 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:19:52.537538    2694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:19:52.540741    2694 config.go:182] Loaded profile config "functional-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:19:52.540979    2694 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:19:52.546510    2694 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 14:19:52.554492    2694 start.go:298] selected driver: qemu2
	I1025 14:19:52.554502    2694 start.go:902] validating driver "qemu2" against &{Name:functional-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.3 ClusterName:functional-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:19:52.554558    2694 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:19:52.560449    2694 out.go:177] 
	W1025 14:19:52.563547    2694 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 14:19:52.567532    2694 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-260000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-260000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-260000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.046583ms)

                                                
                                                
-- stdout --
	* [functional-260000] minikube v1.31.2 sur Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 14:19:52.719543    2705 out.go:296] Setting OutFile to fd 1 ...
	I1025 14:19:52.719688    2705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:19:52.719691    2705 out.go:309] Setting ErrFile to fd 2...
	I1025 14:19:52.719694    2705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 14:19:52.719820    2705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
	I1025 14:19:52.721213    2705 out.go:303] Setting JSON to false
	I1025 14:19:52.738326    2705 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1166,"bootTime":1698267626,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 14:19:52.738422    2705 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 14:19:52.742570    2705 out.go:177] * [functional-260000] minikube v1.31.2 sur Darwin 14.0 (arm64)
	I1025 14:19:52.749530    2705 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 14:19:52.749589    2705 notify.go:220] Checking for updates...
	I1025 14:19:52.756560    2705 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	I1025 14:19:52.759570    2705 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 14:19:52.762526    2705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 14:19:52.765552    2705 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	I1025 14:19:52.768510    2705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 14:19:52.771773    2705 config.go:182] Loaded profile config "functional-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 14:19:52.772004    2705 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 14:19:52.776490    2705 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1025 14:19:52.783537    2705 start.go:298] selected driver: qemu2
	I1025 14:19:52.783545    2705 start.go:902] validating driver "qemu2" against &{Name:functional-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.3 ClusterName:functional-260000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 14:19:52.783605    2705 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 14:19:52.788517    2705 out.go:177] 
	W1025 14:19:52.792553    2705 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 14:19:52.795572    2705 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (24.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7d012704-7d32-47a8-9813-d1ba4c23ca2c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007776542s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-260000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-260000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-260000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-260000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7b2f8a9c-f7e7-4a09-a5e2-df68193aca9e] Pending
helpers_test.go:344: "sp-pod" [7b2f8a9c-f7e7-4a09-a5e2-df68193aca9e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7b2f8a9c-f7e7-4a09-a5e2-df68193aca9e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.009242458s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-260000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-260000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-260000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c82768bc-43a7-4f3d-8c1f-aa777f3342ce] Pending
helpers_test.go:344: "sp-pod" [c82768bc-43a7-4f3d-8c1f-aa777f3342ce] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c82768bc-43a7-4f3d-8c1f-aa777f3342ce] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.007786333s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-260000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.31s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh -n functional-260000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 cp functional-260000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd11475337/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh -n functional-260000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1723/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "sudo cat /etc/test/nested/copy/1723/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1723.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "sudo cat /etc/ssl/certs/1723.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1723.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "sudo cat /usr/share/ca-certificates/1723.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/17232.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "sudo cat /etc/ssl/certs/17232.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/17232.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "sudo cat /usr/share/ca-certificates/17232.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-260000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-260000 ssh "sudo systemctl is-active crio": exit status 1 (141.305917ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-260000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-260000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-260000
docker.io/kubernetesui/metrics-scraper:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-260000 image ls --format short --alsologtostderr:
I1025 14:19:59.993628    2735 out.go:296] Setting OutFile to fd 1 ...
I1025 14:19:59.993988    2735 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 14:19:59.993992    2735 out.go:309] Setting ErrFile to fd 2...
I1025 14:19:59.993995    2735 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 14:19:59.994145    2735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
I1025 14:19:59.994558    2735 config.go:182] Loaded profile config "functional-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 14:19:59.994619    2735 config.go:182] Loaded profile config "functional-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 14:19:59.995623    2735 ssh_runner.go:195] Run: systemctl --version
I1025 14:19:59.995631    2735 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/functional-260000/id_rsa Username:docker}
I1025 14:20:00.025277    2735 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-260000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 97930d6f4eecd | 192MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.3           | 8276439b4f237 | 116MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| docker.io/library/minikube-local-cache-test | functional-260000 | feddc8bd230f8 | 30B    |
| docker.io/library/nginx                     | alpine            | aae348c9fbd40 | 48.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/busybox                 | latest            | 71a676dd070f4 | 1.41MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-apiserver              | v1.28.3           | 537e9a59ee2fd | 120MB  |
| registry.k8s.io/kube-proxy                  | v1.28.3           | a5dd5cdd6d3ef | 68.3MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/google-containers/addon-resizer      | functional-260000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/localhost/my-image                | functional-260000 | 84f62587177d3 | 1.41MB |
| registry.k8s.io/kube-scheduler              | v1.28.3           | 42a4e73724daa | 57.8MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-260000 image ls --format table --alsologtostderr:
I1025 14:20:01.774433    2749 out.go:296] Setting OutFile to fd 1 ...
I1025 14:20:01.774581    2749 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 14:20:01.774584    2749 out.go:309] Setting ErrFile to fd 2...
I1025 14:20:01.774587    2749 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 14:20:01.774713    2749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
I1025 14:20:01.775109    2749 config.go:182] Loaded profile config "functional-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 14:20:01.775172    2749 config.go:182] Loaded profile config "functional-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 14:20:01.776109    2749 ssh_runner.go:195] Run: systemctl --version
I1025 14:20:01.776116    2749 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/functional-260000/id_rsa Username:docker}
I1025 14:20:01.807642    2749 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2023/10/25 14:20:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-260000 image ls --format json --alsologtostderr:
[{"id":"a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"68300000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1410000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"feddc8bd230f8327fc1f76fe35187c4035ca4ee1c434e845f4988e98548c0602",
"repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-260000"],"size":"30"},{"id":"97930d6f4eecda673e2f3d7ec2983bce00b353792d1a9044b6477a3c51fcb185","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/
coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-260000"],"size":"32900000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"84f62587177d350c14fceb5ca5662a78f70a949271502240824f59b85a59254f","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-260000"],"size":"1410000"},{"id":"537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"s
ize":"120000000"},{"id":"8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"116000000"},{"id":"42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"57800000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-260000 image ls --format json --alsologtostderr:
I1025 14:20:01.698501    2747 out.go:296] Setting OutFile to fd 1 ...
I1025 14:20:01.698676    2747 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 14:20:01.698679    2747 out.go:309] Setting ErrFile to fd 2...
I1025 14:20:01.698681    2747 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 14:20:01.698803    2747 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
I1025 14:20:01.699190    2747 config.go:182] Loaded profile config "functional-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 14:20:01.699245    2747 config.go:182] Loaded profile config "functional-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 14:20:01.700134    2747 ssh_runner.go:195] Run: systemctl --version
I1025 14:20:01.700141    2747 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/functional-260000/id_rsa Username:docker}
I1025 14:20:01.728733    2747 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-260000 image ls --format yaml --alsologtostderr:
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: feddc8bd230f8327fc1f76fe35187c4035ca4ee1c434e845f4988e98548c0602
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-260000
size: "30"
- id: 97930d6f4eecda673e2f3d7ec2983bce00b353792d1a9044b6477a3c51fcb185
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "120000000"
- id: a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "68300000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-260000
size: "32900000"
- id: aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48400000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "57800000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "116000000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-260000 image ls --format yaml --alsologtostderr:
I1025 14:20:00.074279    2737 out.go:296] Setting OutFile to fd 1 ...
I1025 14:20:00.074467    2737 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 14:20:00.074471    2737 out.go:309] Setting ErrFile to fd 2...
I1025 14:20:00.074474    2737 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 14:20:00.074610    2737 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
I1025 14:20:00.075059    2737 config.go:182] Loaded profile config "functional-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 14:20:00.075124    2737 config.go:182] Loaded profile config "functional-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 14:20:00.076106    2737 ssh_runner.go:195] Run: systemctl --version
I1025 14:20:00.076114    2737 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/functional-260000/id_rsa Username:docker}
I1025 14:20:00.105578    2737 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-260000 ssh pgrep buildkitd: exit status 1 (66.79125ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image build -t localhost/my-image:functional-260000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-260000 image build -t localhost/my-image:functional-260000 testdata/build --alsologtostderr: (1.396213875s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-260000 image build -t localhost/my-image:functional-260000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 6f20baed3331
Removing intermediate container 6f20baed3331
---> de1aedfd1565
Step 3/3 : ADD content.txt /
---> 84f62587177d
Successfully built 84f62587177d
Successfully tagged localhost/my-image:functional-260000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-260000 image build -t localhost/my-image:functional-260000 testdata/build --alsologtostderr:
I1025 14:20:00.225991    2741 out.go:296] Setting OutFile to fd 1 ...
I1025 14:20:00.226259    2741 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 14:20:00.226264    2741 out.go:309] Setting ErrFile to fd 2...
I1025 14:20:00.226266    2741 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 14:20:00.226405    2741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-1304/.minikube/bin
I1025 14:20:00.226879    2741 config.go:182] Loaded profile config "functional-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 14:20:00.227628    2741 config.go:182] Loaded profile config "functional-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 14:20:00.228550    2741 ssh_runner.go:195] Run: systemctl --version
I1025 14:20:00.228561    2741 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17488-1304/.minikube/machines/functional-260000/id_rsa Username:docker}
I1025 14:20:00.257572    2741 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3827912253.tar
I1025 14:20:00.257626    2741 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 14:20:00.260779    2741 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3827912253.tar
I1025 14:20:00.262465    2741 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3827912253.tar: stat -c "%s %y" /var/lib/minikube/build/build.3827912253.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3827912253.tar': No such file or directory
I1025 14:20:00.262481    2741 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3827912253.tar --> /var/lib/minikube/build/build.3827912253.tar (3072 bytes)
I1025 14:20:00.269625    2741 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3827912253
I1025 14:20:00.272494    2741 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3827912253 -xf /var/lib/minikube/build/build.3827912253.tar
I1025 14:20:00.276031    2741 docker.go:341] Building image: /var/lib/minikube/build/build.3827912253
I1025 14:20:00.276077    2741 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-260000 /var/lib/minikube/build/build.3827912253
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I1025 14:20:01.577320    2741 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-260000 /var/lib/minikube/build/build.3827912253: (1.301228167s)
I1025 14:20:01.577387    2741 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3827912253
I1025 14:20:01.580615    2741 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3827912253.tar
I1025 14:20:01.583412    2741 build_images.go:207] Built localhost/my-image:functional-260000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3827912253.tar
I1025 14:20:01.583426    2741 build_images.go:123] succeeded building to: functional-260000
I1025 14:20:01.583430    2741 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.598439166s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-260000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.64s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/bash (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-260000 docker-env) && out/minikube-darwin-arm64 status -p functional-260000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-260000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (12.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-260000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-260000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-mxg96" [b778b482-bc6e-426d-81a1-b58b90477652] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-mxg96" [b778b482-bc6e-426d-81a1-b58b90477652] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.01249175s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.12s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image load --daemon gcr.io/google-containers/addon-resizer:functional-260000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-260000 image load --daemon gcr.io/google-containers/addon-resizer:functional-260000 --alsologtostderr: (2.079035709s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.16s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image load --daemon gcr.io/google-containers/addon-resizer:functional-260000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-260000 image load --daemon gcr.io/google-containers/addon-resizer:functional-260000 --alsologtostderr: (1.446903542s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.579543292s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-260000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image load --daemon gcr.io/google-containers/addon-resizer:functional-260000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-260000 image load --daemon gcr.io/google-containers/addon-resizer:functional-260000 --alsologtostderr: (1.860664958s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.61s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image save gcr.io/google-containers/addon-resizer:functional-260000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image rm gcr.io/google-containers/addon-resizer:functional-260000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-260000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 image save --daemon gcr.io/google-containers/addon-resizer:functional-260000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-260000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.74s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-260000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-260000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-260000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2542: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-260000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-260000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-260000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ccd9f796-eae6-4258-ac2b-e0b3eface89b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ccd9f796-eae6-4258-ac2b-e0b3eface89b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.008847667s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.12s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 service list -o json
functional_test.go:1493: Took "93.448167ms" to run "out/minikube-darwin-arm64 -p functional-260000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:30536
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:30536
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-260000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.41.166 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-260000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "120.309ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "37.67625ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "117.014ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "38.651417ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (5.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-260000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port143571658/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698268784394534000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port143571658/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698268784394534000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port143571658/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698268784394534000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port143571658/001/test-1698268784394534000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (67.921ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 21:19 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 21:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 21:19 test-1698268784394534000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh cat /mount-9p/test-1698268784394534000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-260000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e07f164c-5fd5-45c0-97c7-5196ccb67d7b] Pending
helpers_test.go:344: "busybox-mount" [e07f164c-5fd5-45c0-97c7-5196ccb67d7b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e07f164c-5fd5-45c0-97c7-5196ccb67d7b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e07f164c-5fd5-45c0-97c7-5196ccb67d7b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.007632583s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-260000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-260000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port143571658/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.31s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-260000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port543950386/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (67.96675ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-260000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port543950386/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-260000 ssh "sudo umount -f /mount-9p": exit status 1 (66.00125ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-260000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-260000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port543950386/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.16s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-260000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup561143949/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-260000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup561143949/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-260000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup561143949/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T" /mount1: exit status 1 (79.628208ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T" /mount3: exit status 1 (64.336459ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-260000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-260000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-260000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup561143949/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-260000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup561143949/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-260000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup561143949/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.11s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-260000
--- PASS: TestFunctional/delete_addon-resizer_images (0.11s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-260000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-260000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                    
x
+
TestImageBuild/serial/Setup (31.83s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-069000 --driver=qemu2 
E1025 14:20:20.657954    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-069000 --driver=qemu2 : (31.828766792s)
--- PASS: TestImageBuild/serial/Setup (31.83s)

                                                
                                    
x
+
TestImageBuild/serial/NormalBuild (1.03s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-069000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-069000: (1.027447417s)
--- PASS: TestImageBuild/serial/NormalBuild (1.03s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithDockerIgnore (0.13s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-069000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.13s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-069000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (70.71s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-187000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-187000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m10.713344042s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (70.71s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (20.85s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-187000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-187000 addons enable ingress --alsologtostderr -v=5: (20.844972042s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (20.85s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.23s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-187000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.23s)

                                                
                                    
x
+
TestJSONOutput/start/Command (46.19s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-242000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E1025 14:23:04.500802    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/addons-355000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-242000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (46.193214875s)
--- PASS: TestJSONOutput/start/Command (46.19s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.28s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-242000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.28s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.22s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-242000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.22s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (12.08s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-242000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-242000 --output=json --user=testUser: (12.078636833s)
--- PASS: TestJSONOutput/stop/Command (12.08s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.35s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-065000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-065000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.913834ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"70999c59-62c5-474c-a07e-aefcd816a4d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-065000] minikube v1.31.2 on Darwin 14.0 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b84570d4-dd0c-4314-806b-332757d98d1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17488"}}
	{"specversion":"1.0","id":"b8e27c81-937e-4635-ab1b-a89499cd8759","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig"}}
	{"specversion":"1.0","id":"cf2494ca-0d13-41de-8d90-5e0c8f9bfd32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"5a53f36d-2b41-4d63-a8c4-568538e07603","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c42f1bfa-9f6b-4c13-8757-c566d7691d94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube"}}
	{"specversion":"1.0","id":"ab4bef21-a49f-4188-b9f8-64f80375f0b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"99a4ca61-eb0f-41b3-b224-f4ba3c5afd22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-065000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-065000
--- PASS: TestErrorJSONOutput (0.35s)

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (65.37s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-631000 --driver=qemu2 
E1025 14:24:06.790357    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
E1025 14:24:06.796703    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
E1025 14:24:06.808735    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
E1025 14:24:06.830801    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
E1025 14:24:06.872856    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
E1025 14:24:06.954892    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
E1025 14:24:07.116939    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
E1025 14:24:07.438979    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
E1025 14:24:08.081097    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
E1025 14:24:09.363197    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
E1025 14:24:11.925340    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
E1025 14:24:17.046853    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
E1025 14:24:27.289027    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-631000 --driver=qemu2 : (30.845773958s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-633000 --driver=qemu2 
E1025 14:24:47.771202    1723 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-1304/.minikube/profiles/functional-260000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-633000 --driver=qemu2 : (33.733908416s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-631000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-633000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-633000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-633000
helpers_test.go:175: Cleaning up "first-631000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-631000
--- PASS: TestMinikubeProfile (65.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-354000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-354000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (93.932208ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-354000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-1304/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-1304/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-354000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-354000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (44.63425ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-354000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.15s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-354000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-354000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-354000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (44.593625ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-354000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-750000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-750000 -n old-k8s-version-750000: exit status 7 (31.688042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-750000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-549000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-549000 -n no-preload-549000: exit status 7 (31.415792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-549000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-202000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-202000 -n embed-certs-202000: exit status 7 (32.388792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-202000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-040000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-040000 -n default-k8s-diff-port-040000: exit status 7 (31.838792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-040000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-155000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-155000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-155000 -n newest-cni-155000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-155000 -n newest-cni-155000: exit status 7 (33.470709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-155000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.10s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (22/259)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (2.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-475000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-475000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-475000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-475000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475000"

                                                
                                                
----------------------- debugLogs end: cilium-475000 [took: 2.252932792s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-475000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-475000
--- SKIP: TestNetworkPlugins/group/cilium (2.50s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-974000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-974000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                    
Copied to clipboard